Test Report: KVM_Linux_crio 19373

afa0c1cf199b27e59d48f8572184259dc9d34cb2:2024-08-06:35664

Test fail (14/230)

TestAddons/parallel/Ingress (151.73s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-435364 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-435364 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-435364 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [f96f1bbf-3982-41a3-94f0-5cab0827ddb3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [f96f1bbf-3982-41a3-94f0-5cab0827ddb3] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003631867s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-435364 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-435364 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.24544949s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-435364 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-435364 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.129
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-435364 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-435364 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-435364 addons disable ingress --alsologtostderr -v=1: (7.711584206s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-435364 -n addons-435364
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-435364 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-435364 logs -n 25: (1.239680532s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-769780                                                                     | download-only-769780 | jenkins | v1.33.1 | 05 Aug 24 22:49 UTC | 05 Aug 24 22:49 UTC |
	| delete  | -p download-only-068196                                                                     | download-only-068196 | jenkins | v1.33.1 | 05 Aug 24 22:49 UTC | 05 Aug 24 22:49 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-208535 | jenkins | v1.33.1 | 05 Aug 24 22:49 UTC |                     |
	|         | binary-mirror-208535                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:41223                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-208535                                                                     | binary-mirror-208535 | jenkins | v1.33.1 | 05 Aug 24 22:49 UTC | 05 Aug 24 22:49 UTC |
	| addons  | disable dashboard -p                                                                        | addons-435364        | jenkins | v1.33.1 | 05 Aug 24 22:49 UTC |                     |
	|         | addons-435364                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-435364        | jenkins | v1.33.1 | 05 Aug 24 22:49 UTC |                     |
	|         | addons-435364                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-435364 --wait=true                                                                | addons-435364        | jenkins | v1.33.1 | 05 Aug 24 22:49 UTC | 05 Aug 24 22:53 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-435364 addons disable                                                                | addons-435364        | jenkins | v1.33.1 | 05 Aug 24 22:53 UTC | 05 Aug 24 22:53 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-435364 ssh cat                                                                       | addons-435364        | jenkins | v1.33.1 | 05 Aug 24 22:53 UTC | 05 Aug 24 22:53 UTC |
	|         | /opt/local-path-provisioner/pvc-df517976-b98a-4ba5-bb26-cc04d40ee4f9_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-435364 addons disable                                                                | addons-435364        | jenkins | v1.33.1 | 05 Aug 24 22:53 UTC | 05 Aug 24 22:54 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-435364 ip                                                                            | addons-435364        | jenkins | v1.33.1 | 05 Aug 24 22:53 UTC | 05 Aug 24 22:53 UTC |
	| addons  | addons-435364 addons disable                                                                | addons-435364        | jenkins | v1.33.1 | 05 Aug 24 22:53 UTC | 05 Aug 24 22:53 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-435364 addons disable                                                                | addons-435364        | jenkins | v1.33.1 | 05 Aug 24 22:54 UTC | 05 Aug 24 22:54 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-435364        | jenkins | v1.33.1 | 05 Aug 24 22:54 UTC | 05 Aug 24 22:54 UTC |
	|         | -p addons-435364                                                                            |                      |         |         |                     |                     |
	| addons  | addons-435364 addons disable                                                                | addons-435364        | jenkins | v1.33.1 | 05 Aug 24 22:54 UTC | 05 Aug 24 22:54 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-435364        | jenkins | v1.33.1 | 05 Aug 24 22:54 UTC | 05 Aug 24 22:54 UTC |
	|         | addons-435364                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-435364        | jenkins | v1.33.1 | 05 Aug 24 22:54 UTC | 05 Aug 24 22:54 UTC |
	|         | -p addons-435364                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-435364        | jenkins | v1.33.1 | 05 Aug 24 22:54 UTC | 05 Aug 24 22:54 UTC |
	|         | addons-435364                                                                               |                      |         |         |                     |                     |
	| addons  | addons-435364 addons                                                                        | addons-435364        | jenkins | v1.33.1 | 05 Aug 24 22:54 UTC | 05 Aug 24 22:54 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-435364 addons                                                                        | addons-435364        | jenkins | v1.33.1 | 05 Aug 24 22:54 UTC | 05 Aug 24 22:54 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-435364 addons disable                                                                | addons-435364        | jenkins | v1.33.1 | 05 Aug 24 22:54 UTC | 05 Aug 24 22:54 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-435364 ssh curl -s                                                                   | addons-435364        | jenkins | v1.33.1 | 05 Aug 24 22:54 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-435364 ip                                                                            | addons-435364        | jenkins | v1.33.1 | 05 Aug 24 22:57 UTC | 05 Aug 24 22:57 UTC |
	| addons  | addons-435364 addons disable                                                                | addons-435364        | jenkins | v1.33.1 | 05 Aug 24 22:57 UTC | 05 Aug 24 22:57 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-435364 addons disable                                                                | addons-435364        | jenkins | v1.33.1 | 05 Aug 24 22:57 UTC | 05 Aug 24 22:57 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 22:49:48
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 22:49:48.506687   18059 out.go:291] Setting OutFile to fd 1 ...
	I0805 22:49:48.506951   18059 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 22:49:48.506961   18059 out.go:304] Setting ErrFile to fd 2...
	I0805 22:49:48.506968   18059 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 22:49:48.507203   18059 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-9606/.minikube/bin
	I0805 22:49:48.507793   18059 out.go:298] Setting JSON to false
	I0805 22:49:48.508591   18059 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1934,"bootTime":1722896254,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0805 22:49:48.508647   18059 start.go:139] virtualization: kvm guest
	I0805 22:49:48.510554   18059 out.go:177] * [addons-435364] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0805 22:49:48.511915   18059 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 22:49:48.511934   18059 notify.go:220] Checking for updates...
	I0805 22:49:48.514715   18059 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 22:49:48.515975   18059 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19373-9606/kubeconfig
	I0805 22:49:48.517159   18059 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19373-9606/.minikube
	I0805 22:49:48.518144   18059 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0805 22:49:48.519283   18059 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 22:49:48.520484   18059 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 22:49:48.551637   18059 out.go:177] * Using the kvm2 driver based on user configuration
	I0805 22:49:48.552951   18059 start.go:297] selected driver: kvm2
	I0805 22:49:48.552970   18059 start.go:901] validating driver "kvm2" against <nil>
	I0805 22:49:48.552988   18059 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 22:49:48.553710   18059 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 22:49:48.553823   18059 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19373-9606/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0805 22:49:48.568117   18059 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0805 22:49:48.568172   18059 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 22:49:48.568491   18059 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 22:49:48.568525   18059 cni.go:84] Creating CNI manager for ""
	I0805 22:49:48.568534   18059 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 22:49:48.568548   18059 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 22:49:48.568616   18059 start.go:340] cluster config:
	{Name:addons-435364 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-435364 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 22:49:48.568734   18059 iso.go:125] acquiring lock: {Name:mk54a637ed625e04bb2b6adf973b61c976cd6d35 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 22:49:48.570806   18059 out.go:177] * Starting "addons-435364" primary control-plane node in "addons-435364" cluster
	I0805 22:49:48.572189   18059 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 22:49:48.572237   18059 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0805 22:49:48.572248   18059 cache.go:56] Caching tarball of preloaded images
	I0805 22:49:48.572337   18059 preload.go:172] Found /home/jenkins/minikube-integration/19373-9606/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0805 22:49:48.572350   18059 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0805 22:49:48.572670   18059 profile.go:143] Saving config to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/config.json ...
	I0805 22:49:48.572694   18059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/config.json: {Name:mk973d1a7b74d62cfc2a1a5b42c5b5e91a472399 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:49:48.572847   18059 start.go:360] acquireMachinesLock for addons-435364: {Name:mkd2ba511c39504598222edbf83078b718329186 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 22:49:48.572906   18059 start.go:364] duration metric: took 42.285µs to acquireMachinesLock for "addons-435364"
	I0805 22:49:48.572927   18059 start.go:93] Provisioning new machine with config: &{Name:addons-435364 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:addons-435364 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 22:49:48.573017   18059 start.go:125] createHost starting for "" (driver="kvm2")
	I0805 22:49:48.574792   18059 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0805 22:49:48.574960   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:49:48.575012   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:49:48.589322   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38181
	I0805 22:49:48.589789   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:49:48.590344   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:49:48.590368   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:49:48.590717   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:49:48.590946   18059 main.go:141] libmachine: (addons-435364) Calling .GetMachineName
	I0805 22:49:48.591091   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:49:48.591253   18059 start.go:159] libmachine.API.Create for "addons-435364" (driver="kvm2")
	I0805 22:49:48.591284   18059 client.go:168] LocalClient.Create starting
	I0805 22:49:48.591329   18059 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem
	I0805 22:49:48.777977   18059 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem
	I0805 22:49:48.845591   18059 main.go:141] libmachine: Running pre-create checks...
	I0805 22:49:48.845614   18059 main.go:141] libmachine: (addons-435364) Calling .PreCreateCheck
	I0805 22:49:48.846110   18059 main.go:141] libmachine: (addons-435364) Calling .GetConfigRaw
	I0805 22:49:48.846547   18059 main.go:141] libmachine: Creating machine...
	I0805 22:49:48.846561   18059 main.go:141] libmachine: (addons-435364) Calling .Create
	I0805 22:49:48.846702   18059 main.go:141] libmachine: (addons-435364) Creating KVM machine...
	I0805 22:49:48.847962   18059 main.go:141] libmachine: (addons-435364) DBG | found existing default KVM network
	I0805 22:49:48.848688   18059 main.go:141] libmachine: (addons-435364) DBG | I0805 22:49:48.848564   18080 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0805 22:49:48.848704   18059 main.go:141] libmachine: (addons-435364) DBG | created network xml: 
	I0805 22:49:48.848791   18059 main.go:141] libmachine: (addons-435364) DBG | <network>
	I0805 22:49:48.848851   18059 main.go:141] libmachine: (addons-435364) DBG |   <name>mk-addons-435364</name>
	I0805 22:49:48.848925   18059 main.go:141] libmachine: (addons-435364) DBG |   <dns enable='no'/>
	I0805 22:49:48.848950   18059 main.go:141] libmachine: (addons-435364) DBG |   
	I0805 22:49:48.848966   18059 main.go:141] libmachine: (addons-435364) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0805 22:49:48.848980   18059 main.go:141] libmachine: (addons-435364) DBG |     <dhcp>
	I0805 22:49:48.848992   18059 main.go:141] libmachine: (addons-435364) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0805 22:49:48.849001   18059 main.go:141] libmachine: (addons-435364) DBG |     </dhcp>
	I0805 22:49:48.849009   18059 main.go:141] libmachine: (addons-435364) DBG |   </ip>
	I0805 22:49:48.849016   18059 main.go:141] libmachine: (addons-435364) DBG |   
	I0805 22:49:48.849023   18059 main.go:141] libmachine: (addons-435364) DBG | </network>
	I0805 22:49:48.849032   18059 main.go:141] libmachine: (addons-435364) DBG | 
	I0805 22:49:48.854063   18059 main.go:141] libmachine: (addons-435364) DBG | trying to create private KVM network mk-addons-435364 192.168.39.0/24...
	I0805 22:49:48.915039   18059 main.go:141] libmachine: (addons-435364) DBG | private KVM network mk-addons-435364 192.168.39.0/24 created
	I0805 22:49:48.915081   18059 main.go:141] libmachine: (addons-435364) Setting up store path in /home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364 ...
	I0805 22:49:48.915099   18059 main.go:141] libmachine: (addons-435364) DBG | I0805 22:49:48.914965   18080 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19373-9606/.minikube
	I0805 22:49:48.915133   18059 main.go:141] libmachine: (addons-435364) Building disk image from file:///home/jenkins/minikube-integration/19373-9606/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0805 22:49:48.915234   18059 main.go:141] libmachine: (addons-435364) Downloading /home/jenkins/minikube-integration/19373-9606/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19373-9606/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0805 22:49:49.168734   18059 main.go:141] libmachine: (addons-435364) DBG | I0805 22:49:49.168581   18080 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa...
	I0805 22:49:49.322697   18059 main.go:141] libmachine: (addons-435364) DBG | I0805 22:49:49.322569   18080 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/addons-435364.rawdisk...
	I0805 22:49:49.322722   18059 main.go:141] libmachine: (addons-435364) DBG | Writing magic tar header
	I0805 22:49:49.322732   18059 main.go:141] libmachine: (addons-435364) DBG | Writing SSH key tar header
	I0805 22:49:49.322789   18059 main.go:141] libmachine: (addons-435364) DBG | I0805 22:49:49.322748   18080 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364 ...
	I0805 22:49:49.322876   18059 main.go:141] libmachine: (addons-435364) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364
	I0805 22:49:49.322906   18059 main.go:141] libmachine: (addons-435364) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19373-9606/.minikube/machines
	I0805 22:49:49.322921   18059 main.go:141] libmachine: (addons-435364) Setting executable bit set on /home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364 (perms=drwx------)
	I0805 22:49:49.322927   18059 main.go:141] libmachine: (addons-435364) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19373-9606/.minikube
	I0805 22:49:49.322936   18059 main.go:141] libmachine: (addons-435364) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19373-9606
	I0805 22:49:49.322944   18059 main.go:141] libmachine: (addons-435364) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0805 22:49:49.322951   18059 main.go:141] libmachine: (addons-435364) DBG | Checking permissions on dir: /home/jenkins
	I0805 22:49:49.322963   18059 main.go:141] libmachine: (addons-435364) Setting executable bit set on /home/jenkins/minikube-integration/19373-9606/.minikube/machines (perms=drwxr-xr-x)
	I0805 22:49:49.322970   18059 main.go:141] libmachine: (addons-435364) Setting executable bit set on /home/jenkins/minikube-integration/19373-9606/.minikube (perms=drwxr-xr-x)
	I0805 22:49:49.322977   18059 main.go:141] libmachine: (addons-435364) Setting executable bit set on /home/jenkins/minikube-integration/19373-9606 (perms=drwxrwxr-x)
	I0805 22:49:49.322990   18059 main.go:141] libmachine: (addons-435364) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0805 22:49:49.323000   18059 main.go:141] libmachine: (addons-435364) DBG | Checking permissions on dir: /home
	I0805 22:49:49.323008   18059 main.go:141] libmachine: (addons-435364) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0805 22:49:49.323021   18059 main.go:141] libmachine: (addons-435364) DBG | Skipping /home - not owner
	I0805 22:49:49.323030   18059 main.go:141] libmachine: (addons-435364) Creating domain...
	I0805 22:49:49.324031   18059 main.go:141] libmachine: (addons-435364) define libvirt domain using xml: 
	I0805 22:49:49.324049   18059 main.go:141] libmachine: (addons-435364) <domain type='kvm'>
	I0805 22:49:49.324058   18059 main.go:141] libmachine: (addons-435364)   <name>addons-435364</name>
	I0805 22:49:49.324066   18059 main.go:141] libmachine: (addons-435364)   <memory unit='MiB'>4000</memory>
	I0805 22:49:49.324075   18059 main.go:141] libmachine: (addons-435364)   <vcpu>2</vcpu>
	I0805 22:49:49.324091   18059 main.go:141] libmachine: (addons-435364)   <features>
	I0805 22:49:49.324125   18059 main.go:141] libmachine: (addons-435364)     <acpi/>
	I0805 22:49:49.324155   18059 main.go:141] libmachine: (addons-435364)     <apic/>
	I0805 22:49:49.324179   18059 main.go:141] libmachine: (addons-435364)     <pae/>
	I0805 22:49:49.324194   18059 main.go:141] libmachine: (addons-435364)     
	I0805 22:49:49.324204   18059 main.go:141] libmachine: (addons-435364)   </features>
	I0805 22:49:49.324210   18059 main.go:141] libmachine: (addons-435364)   <cpu mode='host-passthrough'>
	I0805 22:49:49.324231   18059 main.go:141] libmachine: (addons-435364)   
	I0805 22:49:49.324245   18059 main.go:141] libmachine: (addons-435364)   </cpu>
	I0805 22:49:49.324254   18059 main.go:141] libmachine: (addons-435364)   <os>
	I0805 22:49:49.324260   18059 main.go:141] libmachine: (addons-435364)     <type>hvm</type>
	I0805 22:49:49.324269   18059 main.go:141] libmachine: (addons-435364)     <boot dev='cdrom'/>
	I0805 22:49:49.324279   18059 main.go:141] libmachine: (addons-435364)     <boot dev='hd'/>
	I0805 22:49:49.324296   18059 main.go:141] libmachine: (addons-435364)     <bootmenu enable='no'/>
	I0805 22:49:49.324309   18059 main.go:141] libmachine: (addons-435364)   </os>
	I0805 22:49:49.324319   18059 main.go:141] libmachine: (addons-435364)   <devices>
	I0805 22:49:49.324331   18059 main.go:141] libmachine: (addons-435364)     <disk type='file' device='cdrom'>
	I0805 22:49:49.324348   18059 main.go:141] libmachine: (addons-435364)       <source file='/home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/boot2docker.iso'/>
	I0805 22:49:49.324360   18059 main.go:141] libmachine: (addons-435364)       <target dev='hdc' bus='scsi'/>
	I0805 22:49:49.324372   18059 main.go:141] libmachine: (addons-435364)       <readonly/>
	I0805 22:49:49.324382   18059 main.go:141] libmachine: (addons-435364)     </disk>
	I0805 22:49:49.324392   18059 main.go:141] libmachine: (addons-435364)     <disk type='file' device='disk'>
	I0805 22:49:49.324405   18059 main.go:141] libmachine: (addons-435364)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0805 22:49:49.324421   18059 main.go:141] libmachine: (addons-435364)       <source file='/home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/addons-435364.rawdisk'/>
	I0805 22:49:49.324434   18059 main.go:141] libmachine: (addons-435364)       <target dev='hda' bus='virtio'/>
	I0805 22:49:49.324443   18059 main.go:141] libmachine: (addons-435364)     </disk>
	I0805 22:49:49.324458   18059 main.go:141] libmachine: (addons-435364)     <interface type='network'>
	I0805 22:49:49.324472   18059 main.go:141] libmachine: (addons-435364)       <source network='mk-addons-435364'/>
	I0805 22:49:49.324481   18059 main.go:141] libmachine: (addons-435364)       <model type='virtio'/>
	I0805 22:49:49.324493   18059 main.go:141] libmachine: (addons-435364)     </interface>
	I0805 22:49:49.324503   18059 main.go:141] libmachine: (addons-435364)     <interface type='network'>
	I0805 22:49:49.324516   18059 main.go:141] libmachine: (addons-435364)       <source network='default'/>
	I0805 22:49:49.324527   18059 main.go:141] libmachine: (addons-435364)       <model type='virtio'/>
	I0805 22:49:49.324539   18059 main.go:141] libmachine: (addons-435364)     </interface>
	I0805 22:49:49.324550   18059 main.go:141] libmachine: (addons-435364)     <serial type='pty'>
	I0805 22:49:49.324572   18059 main.go:141] libmachine: (addons-435364)       <target port='0'/>
	I0805 22:49:49.324587   18059 main.go:141] libmachine: (addons-435364)     </serial>
	I0805 22:49:49.324599   18059 main.go:141] libmachine: (addons-435364)     <console type='pty'>
	I0805 22:49:49.324610   18059 main.go:141] libmachine: (addons-435364)       <target type='serial' port='0'/>
	I0805 22:49:49.324622   18059 main.go:141] libmachine: (addons-435364)     </console>
	I0805 22:49:49.324633   18059 main.go:141] libmachine: (addons-435364)     <rng model='virtio'>
	I0805 22:49:49.324644   18059 main.go:141] libmachine: (addons-435364)       <backend model='random'>/dev/random</backend>
	I0805 22:49:49.324664   18059 main.go:141] libmachine: (addons-435364)     </rng>
	I0805 22:49:49.324676   18059 main.go:141] libmachine: (addons-435364)     
	I0805 22:49:49.324684   18059 main.go:141] libmachine: (addons-435364)     
	I0805 22:49:49.324696   18059 main.go:141] libmachine: (addons-435364)   </devices>
	I0805 22:49:49.324706   18059 main.go:141] libmachine: (addons-435364) </domain>
	I0805 22:49:49.324719   18059 main.go:141] libmachine: (addons-435364) 
	I0805 22:49:49.330031   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:64:94:3c in network default
	I0805 22:49:49.330509   18059 main.go:141] libmachine: (addons-435364) Ensuring networks are active...
	I0805 22:49:49.330532   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:49:49.331146   18059 main.go:141] libmachine: (addons-435364) Ensuring network default is active
	I0805 22:49:49.331442   18059 main.go:141] libmachine: (addons-435364) Ensuring network mk-addons-435364 is active
	I0805 22:49:49.331894   18059 main.go:141] libmachine: (addons-435364) Getting domain xml...
	I0805 22:49:49.332619   18059 main.go:141] libmachine: (addons-435364) Creating domain...
	I0805 22:49:50.750593   18059 main.go:141] libmachine: (addons-435364) Waiting to get IP...
	I0805 22:49:50.751328   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:49:50.751754   18059 main.go:141] libmachine: (addons-435364) DBG | unable to find current IP address of domain addons-435364 in network mk-addons-435364
	I0805 22:49:50.751795   18059 main.go:141] libmachine: (addons-435364) DBG | I0805 22:49:50.751705   18080 retry.go:31] will retry after 214.228264ms: waiting for machine to come up
	I0805 22:49:50.967104   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:49:50.967520   18059 main.go:141] libmachine: (addons-435364) DBG | unable to find current IP address of domain addons-435364 in network mk-addons-435364
	I0805 22:49:50.967548   18059 main.go:141] libmachine: (addons-435364) DBG | I0805 22:49:50.967480   18080 retry.go:31] will retry after 306.207664ms: waiting for machine to come up
	I0805 22:49:51.274919   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:49:51.275342   18059 main.go:141] libmachine: (addons-435364) DBG | unable to find current IP address of domain addons-435364 in network mk-addons-435364
	I0805 22:49:51.275381   18059 main.go:141] libmachine: (addons-435364) DBG | I0805 22:49:51.275318   18080 retry.go:31] will retry after 476.689069ms: waiting for machine to come up
	I0805 22:49:51.753916   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:49:51.754387   18059 main.go:141] libmachine: (addons-435364) DBG | unable to find current IP address of domain addons-435364 in network mk-addons-435364
	I0805 22:49:51.754409   18059 main.go:141] libmachine: (addons-435364) DBG | I0805 22:49:51.754345   18080 retry.go:31] will retry after 606.609457ms: waiting for machine to come up
	I0805 22:49:52.362172   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:49:52.362574   18059 main.go:141] libmachine: (addons-435364) DBG | unable to find current IP address of domain addons-435364 in network mk-addons-435364
	I0805 22:49:52.362610   18059 main.go:141] libmachine: (addons-435364) DBG | I0805 22:49:52.362542   18080 retry.go:31] will retry after 575.123699ms: waiting for machine to come up
	I0805 22:49:52.939358   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:49:52.939660   18059 main.go:141] libmachine: (addons-435364) DBG | unable to find current IP address of domain addons-435364 in network mk-addons-435364
	I0805 22:49:52.939684   18059 main.go:141] libmachine: (addons-435364) DBG | I0805 22:49:52.939637   18080 retry.go:31] will retry after 774.827552ms: waiting for machine to come up
	I0805 22:49:53.716066   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:49:53.716474   18059 main.go:141] libmachine: (addons-435364) DBG | unable to find current IP address of domain addons-435364 in network mk-addons-435364
	I0805 22:49:53.716504   18059 main.go:141] libmachine: (addons-435364) DBG | I0805 22:49:53.716445   18080 retry.go:31] will retry after 1.065801193s: waiting for machine to come up
	I0805 22:49:54.783763   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:49:54.784199   18059 main.go:141] libmachine: (addons-435364) DBG | unable to find current IP address of domain addons-435364 in network mk-addons-435364
	I0805 22:49:54.784217   18059 main.go:141] libmachine: (addons-435364) DBG | I0805 22:49:54.784166   18080 retry.go:31] will retry after 903.298303ms: waiting for machine to come up
	I0805 22:49:55.689188   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:49:55.689539   18059 main.go:141] libmachine: (addons-435364) DBG | unable to find current IP address of domain addons-435364 in network mk-addons-435364
	I0805 22:49:55.689565   18059 main.go:141] libmachine: (addons-435364) DBG | I0805 22:49:55.689499   18080 retry.go:31] will retry after 1.568408021s: waiting for machine to come up
	I0805 22:49:57.260214   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:49:57.260632   18059 main.go:141] libmachine: (addons-435364) DBG | unable to find current IP address of domain addons-435364 in network mk-addons-435364
	I0805 22:49:57.260652   18059 main.go:141] libmachine: (addons-435364) DBG | I0805 22:49:57.260613   18080 retry.go:31] will retry after 2.221891592s: waiting for machine to come up
	I0805 22:49:59.484039   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:49:59.484439   18059 main.go:141] libmachine: (addons-435364) DBG | unable to find current IP address of domain addons-435364 in network mk-addons-435364
	I0805 22:49:59.484472   18059 main.go:141] libmachine: (addons-435364) DBG | I0805 22:49:59.484362   18080 retry.go:31] will retry after 2.439349351s: waiting for machine to come up
	I0805 22:50:01.926995   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:01.927430   18059 main.go:141] libmachine: (addons-435364) DBG | unable to find current IP address of domain addons-435364 in network mk-addons-435364
	I0805 22:50:01.927452   18059 main.go:141] libmachine: (addons-435364) DBG | I0805 22:50:01.927393   18080 retry.go:31] will retry after 2.459070989s: waiting for machine to come up
	I0805 22:50:04.388244   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:04.388626   18059 main.go:141] libmachine: (addons-435364) DBG | unable to find current IP address of domain addons-435364 in network mk-addons-435364
	I0805 22:50:04.388646   18059 main.go:141] libmachine: (addons-435364) DBG | I0805 22:50:04.388589   18080 retry.go:31] will retry after 3.49088023s: waiting for machine to come up
	I0805 22:50:07.880582   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:07.880947   18059 main.go:141] libmachine: (addons-435364) DBG | unable to find current IP address of domain addons-435364 in network mk-addons-435364
	I0805 22:50:07.880973   18059 main.go:141] libmachine: (addons-435364) DBG | I0805 22:50:07.880896   18080 retry.go:31] will retry after 4.573943769s: waiting for machine to come up
	I0805 22:50:12.459645   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:12.460081   18059 main.go:141] libmachine: (addons-435364) Found IP for machine: 192.168.39.129
	I0805 22:50:12.460103   18059 main.go:141] libmachine: (addons-435364) Reserving static IP address...
	I0805 22:50:12.460116   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has current primary IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:12.460438   18059 main.go:141] libmachine: (addons-435364) DBG | unable to find host DHCP lease matching {name: "addons-435364", mac: "52:54:00:99:11:e1", ip: "192.168.39.129"} in network mk-addons-435364
	I0805 22:50:12.531547   18059 main.go:141] libmachine: (addons-435364) Reserved static IP address: 192.168.39.129
	I0805 22:50:12.531580   18059 main.go:141] libmachine: (addons-435364) DBG | Getting to WaitForSSH function...
	I0805 22:50:12.531589   18059 main.go:141] libmachine: (addons-435364) Waiting for SSH to be available...
	I0805 22:50:12.534126   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:12.534561   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:minikube Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:12.534589   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:12.534834   18059 main.go:141] libmachine: (addons-435364) DBG | Using SSH client type: external
	I0805 22:50:12.534857   18059 main.go:141] libmachine: (addons-435364) DBG | Using SSH private key: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa (-rw-------)
	I0805 22:50:12.534899   18059 main.go:141] libmachine: (addons-435364) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.129 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 22:50:12.534917   18059 main.go:141] libmachine: (addons-435364) DBG | About to run SSH command:
	I0805 22:50:12.534926   18059 main.go:141] libmachine: (addons-435364) DBG | exit 0
	I0805 22:50:12.667265   18059 main.go:141] libmachine: (addons-435364) DBG | SSH cmd err, output: <nil>: 
	I0805 22:50:12.667562   18059 main.go:141] libmachine: (addons-435364) KVM machine creation complete!
	I0805 22:50:12.667890   18059 main.go:141] libmachine: (addons-435364) Calling .GetConfigRaw
	I0805 22:50:12.668441   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:12.668643   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:12.668810   18059 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0805 22:50:12.668824   18059 main.go:141] libmachine: (addons-435364) Calling .GetState
	I0805 22:50:12.669869   18059 main.go:141] libmachine: Detecting operating system of created instance...
	I0805 22:50:12.669881   18059 main.go:141] libmachine: Waiting for SSH to be available...
	I0805 22:50:12.669886   18059 main.go:141] libmachine: Getting to WaitForSSH function...
	I0805 22:50:12.669891   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:12.672381   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:12.672722   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:12.672769   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:12.672897   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:12.673061   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:12.673205   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:12.673332   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:12.673480   18059 main.go:141] libmachine: Using SSH client type: native
	I0805 22:50:12.673641   18059 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0805 22:50:12.673650   18059 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0805 22:50:12.778666   18059 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 22:50:12.778694   18059 main.go:141] libmachine: Detecting the provisioner...
	I0805 22:50:12.778703   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:12.781514   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:12.782004   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:12.782038   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:12.782180   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:12.782390   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:12.782592   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:12.782768   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:12.783104   18059 main.go:141] libmachine: Using SSH client type: native
	I0805 22:50:12.783294   18059 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0805 22:50:12.783306   18059 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0805 22:50:12.887921   18059 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0805 22:50:12.888035   18059 main.go:141] libmachine: found compatible host: buildroot
	I0805 22:50:12.888053   18059 main.go:141] libmachine: Provisioning with buildroot...
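The provisioner is detected by running `cat /etc/os-release` and reading its key/value fields (ID=buildroot here). A small illustrative parser for that output, fed with the exact lines from the log; this is a sketch, not minikube's own detection code:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease extracts KEY=value pairs from /etc/os-release-style output.
func parseOSRelease(out string) map[string]string {
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || !strings.Contains(line, "=") {
			continue
		}
		kv := strings.SplitN(line, "=", 2)
		fields[kv[0]] = strings.Trim(kv[1], `"`)
	}
	return fields
}

func main() {
	out := `NAME=Buildroot
VERSION=2023.02.9-dirty
ID=buildroot
VERSION_ID=2023.02.9
PRETTY_NAME="Buildroot 2023.02.9"
`
	info := parseOSRelease(out)
	fmt.Println(info["ID"], info["VERSION_ID"]) // buildroot 2023.02.9
}
```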
	I0805 22:50:12.888064   18059 main.go:141] libmachine: (addons-435364) Calling .GetMachineName
	I0805 22:50:12.888331   18059 buildroot.go:166] provisioning hostname "addons-435364"
	I0805 22:50:12.888358   18059 main.go:141] libmachine: (addons-435364) Calling .GetMachineName
	I0805 22:50:12.888560   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:12.891036   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:12.891368   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:12.891391   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:12.891551   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:12.891727   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:12.891890   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:12.892026   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:12.892199   18059 main.go:141] libmachine: Using SSH client type: native
	I0805 22:50:12.892447   18059 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0805 22:50:12.892464   18059 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-435364 && echo "addons-435364" | sudo tee /etc/hostname
	I0805 22:50:13.010217   18059 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-435364
	
	I0805 22:50:13.010241   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:13.012956   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.013254   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:13.013270   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.013412   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:13.013605   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:13.013769   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:13.013943   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:13.014074   18059 main.go:141] libmachine: Using SSH client type: native
	I0805 22:50:13.014260   18059 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0805 22:50:13.014275   18059 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-435364' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-435364/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-435364' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 22:50:13.131877   18059 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 22:50:13.131907   18059 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19373-9606/.minikube CaCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19373-9606/.minikube}
	I0805 22:50:13.131950   18059 buildroot.go:174] setting up certificates
	I0805 22:50:13.131963   18059 provision.go:84] configureAuth start
	I0805 22:50:13.131980   18059 main.go:141] libmachine: (addons-435364) Calling .GetMachineName
	I0805 22:50:13.132283   18059 main.go:141] libmachine: (addons-435364) Calling .GetIP
	I0805 22:50:13.134856   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.135215   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:13.135247   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.135438   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:13.137936   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.138352   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:13.138367   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.138549   18059 provision.go:143] copyHostCerts
	I0805 22:50:13.138633   18059 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem (1082 bytes)
	I0805 22:50:13.138778   18059 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem (1123 bytes)
	I0805 22:50:13.138846   18059 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem (1679 bytes)
	I0805 22:50:13.138916   18059 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem org=jenkins.addons-435364 san=[127.0.0.1 192.168.39.129 addons-435364 localhost minikube]
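The server certificate above is generated with SANs covering 127.0.0.1, 192.168.39.129 and the names addons-435364, localhost and minikube. A minimal crypto/x509 sketch of building a certificate with those SANs; it is self-signed here purely for brevity, whereas minikube signs server.pem with the ca.pem/ca-key.pem pair listed in the auth options:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Template carrying the same SANs as the log's "san=[...]" list.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-435364"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"addons-435364", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.129")},
	}
	// Self-signed: parent == template. Replace with the CA cert/key to
	// reproduce the real server.pem signing step.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```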
	I0805 22:50:13.252392   18059 provision.go:177] copyRemoteCerts
	I0805 22:50:13.252452   18059 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 22:50:13.252475   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:13.255186   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.255488   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:13.255522   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.255756   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:13.256003   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:13.256161   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:13.256359   18059 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa Username:docker}
	I0805 22:50:13.337367   18059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 22:50:13.364380   18059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 22:50:13.390303   18059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0805 22:50:13.414478   18059 provision.go:87] duration metric: took 282.497674ms to configureAuth
	I0805 22:50:13.414504   18059 buildroot.go:189] setting minikube options for container-runtime
	I0805 22:50:13.414670   18059 config.go:182] Loaded profile config "addons-435364": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 22:50:13.414755   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:13.417135   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.417454   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:13.417482   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.417628   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:13.417793   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:13.417964   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:13.418107   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:13.418289   18059 main.go:141] libmachine: Using SSH client type: native
	I0805 22:50:13.418442   18059 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0805 22:50:13.418457   18059 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 22:50:13.688306   18059 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 22:50:13.688328   18059 main.go:141] libmachine: Checking connection to Docker...
	I0805 22:50:13.688336   18059 main.go:141] libmachine: (addons-435364) Calling .GetURL
	I0805 22:50:13.689629   18059 main.go:141] libmachine: (addons-435364) DBG | Using libvirt version 6000000
	I0805 22:50:13.692003   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.692365   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:13.692395   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.692515   18059 main.go:141] libmachine: Docker is up and running!
	I0805 22:50:13.692530   18059 main.go:141] libmachine: Reticulating splines...
	I0805 22:50:13.692538   18059 client.go:171] duration metric: took 25.101243283s to LocalClient.Create
	I0805 22:50:13.692560   18059 start.go:167] duration metric: took 25.101307848s to libmachine.API.Create "addons-435364"
	I0805 22:50:13.692568   18059 start.go:293] postStartSetup for "addons-435364" (driver="kvm2")
	I0805 22:50:13.692576   18059 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 22:50:13.692593   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:13.692798   18059 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 22:50:13.692822   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:13.695008   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.695365   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:13.695387   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.695540   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:13.695731   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:13.695899   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:13.696038   18059 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa Username:docker}
	I0805 22:50:13.777738   18059 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 22:50:13.782406   18059 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 22:50:13.782428   18059 filesync.go:126] Scanning /home/jenkins/minikube-integration/19373-9606/.minikube/addons for local assets ...
	I0805 22:50:13.782495   18059 filesync.go:126] Scanning /home/jenkins/minikube-integration/19373-9606/.minikube/files for local assets ...
	I0805 22:50:13.782517   18059 start.go:296] duration metric: took 89.945033ms for postStartSetup
	I0805 22:50:13.782547   18059 main.go:141] libmachine: (addons-435364) Calling .GetConfigRaw
	I0805 22:50:13.783152   18059 main.go:141] libmachine: (addons-435364) Calling .GetIP
	I0805 22:50:13.785627   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.786019   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:13.786044   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.786223   18059 profile.go:143] Saving config to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/config.json ...
	I0805 22:50:13.786402   18059 start.go:128] duration metric: took 25.213374021s to createHost
	I0805 22:50:13.786423   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:13.788655   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.788971   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:13.788997   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.789136   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:13.789313   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:13.789473   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:13.789591   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:13.789755   18059 main.go:141] libmachine: Using SSH client type: native
	I0805 22:50:13.789961   18059 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0805 22:50:13.789974   18059 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 22:50:13.895900   18059 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722898213.874725931
	
	I0805 22:50:13.895920   18059 fix.go:216] guest clock: 1722898213.874725931
	I0805 22:50:13.895927   18059 fix.go:229] Guest: 2024-08-05 22:50:13.874725931 +0000 UTC Remote: 2024-08-05 22:50:13.78641307 +0000 UTC m=+25.311597235 (delta=88.312861ms)
	I0805 22:50:13.895975   18059 fix.go:200] guest clock delta is within tolerance: 88.312861ms
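The delta reported at fix.go:229 is simply the guest clock minus the host ("Remote") clock. Reproducing that arithmetic from the two timestamps in the log; the 2-second tolerance below is an assumption for illustration, the log only states the delta is within tolerance:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Go's default time.Time print format; fractional seconds in the
	// input are parsed automatically.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	guest, _ := time.Parse(layout, "2024-08-05 22:50:13.874725931 +0000 UTC")
	remote, _ := time.Parse(layout, "2024-08-05 22:50:13.78641307 +0000 UTC")

	delta := guest.Sub(remote)
	fmt.Println(delta) // 88.312861ms, matching the log

	const assumedTolerance = 2 * time.Second // illustrative threshold, not minikube's constant
	fmt.Println("within tolerance:", delta < assumedTolerance && delta > -assumedTolerance)
}
```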
	I0805 22:50:13.895982   18059 start.go:83] releasing machines lock for "addons-435364", held for 25.32306573s
	I0805 22:50:13.896004   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:13.896238   18059 main.go:141] libmachine: (addons-435364) Calling .GetIP
	I0805 22:50:13.898897   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.899211   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:13.899238   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.899363   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:13.899813   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:13.899988   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:13.900078   18059 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 22:50:13.900118   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:13.900248   18059 ssh_runner.go:195] Run: cat /version.json
	I0805 22:50:13.900275   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:13.902706   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.902818   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.903061   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:13.903087   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.903204   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:13.903339   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:13.903364   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.903343   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:13.903506   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:13.903513   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:13.903655   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:13.903683   18059 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa Username:docker}
	I0805 22:50:13.903766   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:13.903892   18059 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa Username:docker}
	I0805 22:50:13.980141   18059 ssh_runner.go:195] Run: systemctl --version
	I0805 22:50:14.016703   18059 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 22:50:14.174612   18059 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 22:50:14.182545   18059 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 22:50:14.182608   18059 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 22:50:14.198797   18059 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
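The `find ... -exec mv {} {}.mk_disabled` step above side-lines any bridge or podman CNI config so minikube's chosen CNI wins. The same effect sketched in Go (the real command additionally restricts itself to regular files at depth 1; run against a real host this does rename files):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// Disable bridge/podman CNI configs in /etc/cni/net.d by renaming them
// with a .mk_disabled suffix, as the log's find/mv one-liner does.
func main() {
	matches, err := filepath.Glob("/etc/cni/net.d/*")
	if err != nil {
		panic(err)
	}
	for _, path := range matches {
		base := filepath.Base(path)
		if strings.HasSuffix(base, ".mk_disabled") {
			continue
		}
		if strings.Contains(base, "bridge") || strings.Contains(base, "podman") {
			fmt.Printf("disabling %s\n", path)
			if err := os.Rename(path, path+".mk_disabled"); err != nil {
				fmt.Println(err)
			}
		}
	}
}
```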
	I0805 22:50:14.198828   18059 start.go:495] detecting cgroup driver to use...
	I0805 22:50:14.198900   18059 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 22:50:14.214476   18059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 22:50:14.229311   18059 docker.go:217] disabling cri-docker service (if available) ...
	I0805 22:50:14.229375   18059 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 22:50:14.243770   18059 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 22:50:14.258687   18059 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 22:50:14.372937   18059 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 22:50:14.517046   18059 docker.go:233] disabling docker service ...
	I0805 22:50:14.517137   18059 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 22:50:14.531702   18059 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 22:50:14.545769   18059 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 22:50:14.680419   18059 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 22:50:14.804056   18059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 22:50:14.818562   18059 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 22:50:14.837034   18059 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0805 22:50:14.837097   18059 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 22:50:14.847638   18059 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 22:50:14.847695   18059 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 22:50:14.858409   18059 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 22:50:14.868814   18059 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 22:50:14.879613   18059 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 22:50:14.890285   18059 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 22:50:14.900822   18059 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 22:50:14.918278   18059 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
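The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin pause_image to registry.k8s.io/pause:3.9 and force cgroup_manager to "cgroupfs". The same two substitutions sketched with Go regexps over a made-up config fragment (only the two edited keys matter for the illustration):

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Illustrative drop-in contents; the real file is whatever CRI-O ships.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.10"
[crio.runtime]
cgroup_manager = "systemd"
`
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}
```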
	I0805 22:50:14.928807   18059 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 22:50:14.938917   18059 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 22:50:14.938983   18059 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 22:50:14.953935   18059 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 22:50:14.964287   18059 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 22:50:15.080272   18059 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 22:50:15.215676   18059 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 22:50:15.215769   18059 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 22:50:15.220884   18059 start.go:563] Will wait 60s for crictl version
	I0805 22:50:15.220959   18059 ssh_runner.go:195] Run: which crictl
	I0805 22:50:15.225092   18059 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 22:50:15.269151   18059 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 22:50:15.269266   18059 ssh_runner.go:195] Run: crio --version
	I0805 22:50:15.298048   18059 ssh_runner.go:195] Run: crio --version
	I0805 22:50:15.329359   18059 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0805 22:50:15.330464   18059 main.go:141] libmachine: (addons-435364) Calling .GetIP
	I0805 22:50:15.333196   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:15.333615   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:15.333643   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:15.333872   18059 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0805 22:50:15.338208   18059 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
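The bash one-liner above rewrites /etc/hosts so host.minikube.internal resolves to the gateway address 192.168.39.1: drop any stale line for that name, then append the fresh mapping. The same logic over an in-memory hosts file, purely as an illustration of the grep -v / echo pipeline:

```go
package main

import (
	"fmt"
	"strings"
)

// ensureHostEntry removes any existing tab-separated entry for name and
// appends "ip<TAB>name", mirroring the grep -v / echo rewrite in the log.
func ensureHostEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	body := strings.TrimRight(strings.Join(kept, "\n"), "\n")
	return body + "\n" + ip + "\t" + name + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n"
	fmt.Print(ensureHostEntry(hosts, "192.168.39.1", "host.minikube.internal"))
}
```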
	I0805 22:50:15.351724   18059 kubeadm.go:883] updating cluster {Name:addons-435364 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:addons-435364 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 22:50:15.351836   18059 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 22:50:15.351876   18059 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 22:50:15.385565   18059 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0805 22:50:15.385622   18059 ssh_runner.go:195] Run: which lz4
	I0805 22:50:15.389587   18059 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0805 22:50:15.393856   18059 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 22:50:15.393885   18059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0805 22:50:16.732385   18059 crio.go:462] duration metric: took 1.34282579s to copy over tarball
	I0805 22:50:16.732456   18059 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0805 22:50:19.022271   18059 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.289790303s)
	I0805 22:50:19.022296   18059 crio.go:469] duration metric: took 2.289881495s to extract the tarball
	I0805 22:50:19.022303   18059 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0805 22:50:19.061425   18059 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 22:50:19.102433   18059 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 22:50:19.102456   18059 cache_images.go:84] Images are preloaded, skipping loading
	I0805 22:50:19.102466   18059 kubeadm.go:934] updating node { 192.168.39.129 8443 v1.30.3 crio true true} ...
	I0805 22:50:19.102557   18059 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-435364 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.129
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-435364 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
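The kubelet drop-in above is rendered from the node parameters in that config (version v1.30.3, node name addons-435364, node IP 192.168.39.129). A sketch of producing the same ExecStart line with text/template; the template text is a simplification for illustration, not minikube's actual template:

```go
package main

import (
	"os"
	"text/template"
)

var unit = template.Must(template.New("kubelet").Parse(`[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`))

func main() {
	// Values taken from the kubeadm.go:934 node line above.
	unit.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.30.3",
		"NodeName":          "addons-435364",
		"NodeIP":            "192.168.39.129",
	})
}
```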
	I0805 22:50:19.102623   18059 ssh_runner.go:195] Run: crio config
	I0805 22:50:19.155632   18059 cni.go:84] Creating CNI manager for ""
	I0805 22:50:19.155652   18059 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 22:50:19.155662   18059 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 22:50:19.155683   18059 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.129 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-435364 NodeName:addons-435364 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.129"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.129 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 22:50:19.155811   18059 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.129
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-435364"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.129
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.129"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
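One property the kubeadm config above relies on is that podSubnet (10.244.0.0/16) and serviceSubnet (10.96.0.0/12) do not overlap, since pod and service addresses are routed differently. A quick check of that, offered as an illustration rather than anything kubeadm itself runs:

```go
package main

import (
	"fmt"
	"net"
)

// overlaps reports whether two CIDR ranges intersect (true if either
// network's base address falls inside the other).
func overlaps(a, b *net.IPNet) bool {
	return a.Contains(b.IP) || b.Contains(a.IP)
}

func main() {
	_, pods, _ := net.ParseCIDR("10.244.0.0/16")
	_, svcs, _ := net.ParseCIDR("10.96.0.0/12")
	fmt.Println("subnets overlap:", overlaps(pods, svcs)) // false
}
```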
	
	I0805 22:50:19.155874   18059 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 22:50:19.165888   18059 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 22:50:19.165963   18059 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 22:50:19.175277   18059 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0805 22:50:19.192493   18059 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 22:50:19.208928   18059 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0805 22:50:19.225460   18059 ssh_runner.go:195] Run: grep 192.168.39.129	control-plane.minikube.internal$ /etc/hosts
	I0805 22:50:19.229318   18059 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.129	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 22:50:19.241095   18059 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 22:50:19.362185   18059 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 22:50:19.379532   18059 certs.go:68] Setting up /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364 for IP: 192.168.39.129
	I0805 22:50:19.379556   18059 certs.go:194] generating shared ca certs ...
	I0805 22:50:19.379577   18059 certs.go:226] acquiring lock for ca certs: {Name:mkf35a042c1656d191f542eee7fa087aad4d29d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:50:19.379723   18059 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key
	I0805 22:50:19.477775   18059 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt ...
	I0805 22:50:19.477804   18059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt: {Name:mk5a02f51dff7ee2438dcf787168bbc744fdc790 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:50:19.477977   18059 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key ...
	I0805 22:50:19.477991   18059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key: {Name:mkfd2741899892a506c886eae840074b2142988d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:50:19.478087   18059 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key
	I0805 22:50:19.567461   18059 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.crt ...
	I0805 22:50:19.567489   18059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.crt: {Name:mk5879e0ceae46d834ba04a385271f59c818cb7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:50:19.567659   18059 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key ...
	I0805 22:50:19.567673   18059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key: {Name:mk49a52ce17f5d704f71d551b9fec2c09707cba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:50:19.567765   18059 certs.go:256] generating profile certs ...
	I0805 22:50:19.567837   18059 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.key
	I0805 22:50:19.567855   18059 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.crt with IP's: []
	I0805 22:50:19.735776   18059 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.crt ...
	I0805 22:50:19.735828   18059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.crt: {Name:mka04aa8f5aebff03fdcb9f309b7f635eb1fd742 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:50:19.736004   18059 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.key ...
	I0805 22:50:19.736018   18059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.key: {Name:mkc50683ce65cc98818fb6ea611c4e350f4aa4ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:50:19.736115   18059 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/apiserver.key.4a22a0e7
	I0805 22:50:19.736134   18059 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/apiserver.crt.4a22a0e7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.129]
	I0805 22:50:19.914172   18059 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/apiserver.crt.4a22a0e7 ...
	I0805 22:50:19.914202   18059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/apiserver.crt.4a22a0e7: {Name:mkdea0de3134b785bd45cea7b22b0f2fba2ef2b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:50:19.914375   18059 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/apiserver.key.4a22a0e7 ...
	I0805 22:50:19.914391   18059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/apiserver.key.4a22a0e7: {Name:mkc82a40128b9bfeccfb6506850f6c0fbad6215f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:50:19.914487   18059 certs.go:381] copying /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/apiserver.crt.4a22a0e7 -> /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/apiserver.crt
	I0805 22:50:19.914586   18059 certs.go:385] copying /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/apiserver.key.4a22a0e7 -> /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/apiserver.key
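The apiserver certificate above is issued for [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.129]; 10.96.0.1 is the first usable address of the service CIDR 10.96.0.0/12, which is where the in-cluster `kubernetes` service listens. Deriving it from the CIDR (a naive +1 on the last octet, sufficient for this CIDR but not a general increment):

```go
package main

import (
	"fmt"
	"net"
)

// firstServiceIP returns the first usable address of a service CIDR,
// e.g. 10.96.0.1 for 10.96.0.0/12. Does not handle octet overflow.
func firstServiceIP(cidr string) (net.IP, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	ip := ipnet.IP.To4()
	return net.IPv4(ip[0], ip[1], ip[2], ip[3]+1), nil
}

func main() {
	ip, err := firstServiceIP("10.96.0.0/12")
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // 10.96.0.1
}
```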
	I0805 22:50:19.914637   18059 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/proxy-client.key
	I0805 22:50:19.914662   18059 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/proxy-client.crt with IP's: []
	I0805 22:50:20.035665   18059 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/proxy-client.crt ...
	I0805 22:50:20.035694   18059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/proxy-client.crt: {Name:mk057d4c8848c939368271362943917ddd178d9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:50:20.035870   18059 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/proxy-client.key ...
	I0805 22:50:20.035883   18059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/proxy-client.key: {Name:mkfd1b82b140a28dc229a1ce2c7e53ec16a877a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:50:20.036080   18059 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 22:50:20.036113   18059 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem (1082 bytes)
	I0805 22:50:20.036136   18059 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem (1123 bytes)
	I0805 22:50:20.036158   18059 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem (1679 bytes)
	I0805 22:50:20.036726   18059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 22:50:20.066052   18059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0805 22:50:20.094029   18059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 22:50:20.118884   18059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 22:50:20.146757   18059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0805 22:50:20.173214   18059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0805 22:50:20.198268   18059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 22:50:20.223942   18059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 22:50:20.247802   18059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 22:50:20.276375   18059 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 22:50:20.293288   18059 ssh_runner.go:195] Run: openssl version
	I0805 22:50:20.299618   18059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 22:50:20.311271   18059 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 22:50:20.315868   18059 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0805 22:50:20.315920   18059 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 22:50:20.321840   18059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 22:50:20.332834   18059 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 22:50:20.337260   18059 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0805 22:50:20.337302   18059 kubeadm.go:392] StartCluster: {Name:addons-435364 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 C
lusterName:addons-435364 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 22:50:20.337366   18059 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 22:50:20.337405   18059 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 22:50:20.375953   18059 cri.go:89] found id: ""
	I0805 22:50:20.376022   18059 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 22:50:20.386995   18059 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 22:50:20.397281   18059 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 22:50:20.407493   18059 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 22:50:20.407510   18059 kubeadm.go:157] found existing configuration files:
	
	I0805 22:50:20.407547   18059 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 22:50:20.416692   18059 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 22:50:20.416748   18059 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 22:50:20.426696   18059 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 22:50:20.436221   18059 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 22:50:20.436275   18059 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 22:50:20.446349   18059 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 22:50:20.455826   18059 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 22:50:20.455890   18059 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 22:50:20.466797   18059 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 22:50:20.476183   18059 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 22:50:20.476242   18059 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 22:50:20.485645   18059 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 22:50:20.677617   18059 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 22:50:30.354732   18059 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0805 22:50:30.354813   18059 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 22:50:30.354918   18059 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 22:50:30.355046   18059 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 22:50:30.355205   18059 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 22:50:30.355323   18059 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 22:50:30.357294   18059 out.go:204]   - Generating certificates and keys ...
	I0805 22:50:30.357380   18059 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 22:50:30.357453   18059 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 22:50:30.357532   18059 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0805 22:50:30.357612   18059 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0805 22:50:30.357689   18059 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0805 22:50:30.357732   18059 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0805 22:50:30.357778   18059 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0805 22:50:30.357902   18059 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-435364 localhost] and IPs [192.168.39.129 127.0.0.1 ::1]
	I0805 22:50:30.357996   18059 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0805 22:50:30.358128   18059 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-435364 localhost] and IPs [192.168.39.129 127.0.0.1 ::1]
	I0805 22:50:30.358218   18059 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0805 22:50:30.358324   18059 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0805 22:50:30.358394   18059 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0805 22:50:30.358477   18059 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 22:50:30.358551   18059 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 22:50:30.358605   18059 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0805 22:50:30.358653   18059 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 22:50:30.358711   18059 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 22:50:30.358757   18059 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 22:50:30.358831   18059 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 22:50:30.358892   18059 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 22:50:30.360250   18059 out.go:204]   - Booting up control plane ...
	I0805 22:50:30.360329   18059 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 22:50:30.360391   18059 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 22:50:30.360449   18059 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 22:50:30.360539   18059 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 22:50:30.360625   18059 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 22:50:30.360675   18059 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 22:50:30.360779   18059 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0805 22:50:30.360848   18059 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0805 22:50:30.360902   18059 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.991644ms
	I0805 22:50:30.360963   18059 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0805 22:50:30.361014   18059 kubeadm.go:310] [api-check] The API server is healthy after 5.001813165s
	I0805 22:50:30.361119   18059 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 22:50:30.361241   18059 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 22:50:30.361291   18059 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0805 22:50:30.361476   18059 kubeadm.go:310] [mark-control-plane] Marking the node addons-435364 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 22:50:30.361541   18059 kubeadm.go:310] [bootstrap-token] Using token: 9pphx7.k9i7quxpukmqio93
	I0805 22:50:30.363130   18059 out.go:204]   - Configuring RBAC rules ...
	I0805 22:50:30.363250   18059 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 22:50:30.363323   18059 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 22:50:30.363440   18059 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 22:50:30.363558   18059 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 22:50:30.363704   18059 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 22:50:30.363854   18059 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 22:50:30.363977   18059 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 22:50:30.364016   18059 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0805 22:50:30.364054   18059 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0805 22:50:30.364060   18059 kubeadm.go:310] 
	I0805 22:50:30.364110   18059 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0805 22:50:30.364117   18059 kubeadm.go:310] 
	I0805 22:50:30.364189   18059 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0805 22:50:30.364195   18059 kubeadm.go:310] 
	I0805 22:50:30.364237   18059 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0805 22:50:30.364294   18059 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 22:50:30.364339   18059 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 22:50:30.364343   18059 kubeadm.go:310] 
	I0805 22:50:30.364388   18059 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0805 22:50:30.364394   18059 kubeadm.go:310] 
	I0805 22:50:30.364432   18059 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 22:50:30.364438   18059 kubeadm.go:310] 
	I0805 22:50:30.364518   18059 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0805 22:50:30.364630   18059 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 22:50:30.364738   18059 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 22:50:30.364750   18059 kubeadm.go:310] 
	I0805 22:50:30.364856   18059 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0805 22:50:30.364959   18059 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0805 22:50:30.364967   18059 kubeadm.go:310] 
	I0805 22:50:30.365069   18059 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9pphx7.k9i7quxpukmqio93 \
	I0805 22:50:30.365191   18059 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80c3f4a7caafd825f47d5f536053424d1d775e8da247cc5594b6b717e711fcd3 \
	I0805 22:50:30.365221   18059 kubeadm.go:310] 	--control-plane 
	I0805 22:50:30.365230   18059 kubeadm.go:310] 
	I0805 22:50:30.365326   18059 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0805 22:50:30.365334   18059 kubeadm.go:310] 
	I0805 22:50:30.365433   18059 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9pphx7.k9i7quxpukmqio93 \
	I0805 22:50:30.365565   18059 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80c3f4a7caafd825f47d5f536053424d1d775e8da247cc5594b6b717e711fcd3 
	I0805 22:50:30.365578   18059 cni.go:84] Creating CNI manager for ""
	I0805 22:50:30.365584   18059 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 22:50:30.366958   18059 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 22:50:30.368185   18059 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 22:50:30.379494   18059 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0805 22:50:30.401379   18059 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 22:50:30.401470   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:30.401516   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-435364 minikube.k8s.io/updated_at=2024_08_05T22_50_30_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4 minikube.k8s.io/name=addons-435364 minikube.k8s.io/primary=true
	I0805 22:50:30.436537   18059 ops.go:34] apiserver oom_adj: -16
	I0805 22:50:30.523378   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:31.023814   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:31.524347   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:32.023994   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:32.523422   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:33.024071   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:33.523414   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:34.024392   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:34.524445   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:35.024275   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:35.524431   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:36.023992   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:36.523641   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:37.023827   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:37.523715   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:38.024469   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:38.524038   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:39.024219   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:39.523453   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:40.023679   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:40.523482   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:41.023509   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:41.524305   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:42.023450   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:42.523551   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:43.024018   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:43.524237   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:43.676340   18059 kubeadm.go:1113] duration metric: took 13.274933703s to wait for elevateKubeSystemPrivileges
	I0805 22:50:43.676377   18059 kubeadm.go:394] duration metric: took 23.339077634s to StartCluster
	I0805 22:50:43.676396   18059 settings.go:142] acquiring lock: {Name:mkd43028f76794f43f4727efb0b77b9a49886053 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:50:43.676538   18059 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19373-9606/kubeconfig
	I0805 22:50:43.676917   18059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/kubeconfig: {Name:mk4481c5dfe578449439dae4abf8681e1b7df535 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:50:43.677148   18059 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0805 22:50:43.677154   18059 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 22:50:43.677203   18059 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0805 22:50:43.677312   18059 addons.go:69] Setting yakd=true in profile "addons-435364"
	I0805 22:50:43.677321   18059 addons.go:69] Setting gcp-auth=true in profile "addons-435364"
	I0805 22:50:43.677319   18059 addons.go:69] Setting inspektor-gadget=true in profile "addons-435364"
	I0805 22:50:43.677342   18059 addons.go:234] Setting addon yakd=true in "addons-435364"
	I0805 22:50:43.677346   18059 mustload.go:65] Loading cluster: addons-435364
	I0805 22:50:43.677356   18059 addons.go:234] Setting addon inspektor-gadget=true in "addons-435364"
	I0805 22:50:43.677355   18059 config.go:182] Loaded profile config "addons-435364": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 22:50:43.677364   18059 addons.go:69] Setting ingress=true in profile "addons-435364"
	I0805 22:50:43.677380   18059 addons.go:69] Setting metrics-server=true in profile "addons-435364"
	I0805 22:50:43.677378   18059 addons.go:69] Setting ingress-dns=true in profile "addons-435364"
	I0805 22:50:43.677387   18059 host.go:66] Checking if "addons-435364" exists ...
	I0805 22:50:43.677396   18059 addons.go:234] Setting addon ingress=true in "addons-435364"
	I0805 22:50:43.677401   18059 addons.go:234] Setting addon metrics-server=true in "addons-435364"
	I0805 22:50:43.677401   18059 addons.go:69] Setting helm-tiller=true in profile "addons-435364"
	I0805 22:50:43.677403   18059 addons.go:234] Setting addon ingress-dns=true in "addons-435364"
	I0805 22:50:43.677418   18059 host.go:66] Checking if "addons-435364" exists ...
	I0805 22:50:43.677422   18059 addons.go:234] Setting addon helm-tiller=true in "addons-435364"
	I0805 22:50:43.677432   18059 host.go:66] Checking if "addons-435364" exists ...
	I0805 22:50:43.677435   18059 host.go:66] Checking if "addons-435364" exists ...
	I0805 22:50:43.677439   18059 host.go:66] Checking if "addons-435364" exists ...
	I0805 22:50:43.677569   18059 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-435364"
	I0805 22:50:43.677680   18059 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-435364"
	I0805 22:50:43.677708   18059 host.go:66] Checking if "addons-435364" exists ...
	I0805 22:50:43.677844   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.677854   18059 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-435364"
	I0805 22:50:43.677860   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.677866   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.677880   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.677880   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.677887   18059 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-435364"
	I0805 22:50:43.677892   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.677907   18059 host.go:66] Checking if "addons-435364" exists ...
	I0805 22:50:43.677374   18059 host.go:66] Checking if "addons-435364" exists ...
	I0805 22:50:43.678034   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.678055   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.678090   18059 config.go:182] Loaded profile config "addons-435364": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 22:50:43.678139   18059 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-435364"
	I0805 22:50:43.678171   18059 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-435364"
	I0805 22:50:43.678251   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.678279   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.678306   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.678323   18059 addons.go:69] Setting default-storageclass=true in profile "addons-435364"
	I0805 22:50:43.678339   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.678351   18059 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-435364"
	I0805 22:50:43.678390   18059 addons.go:69] Setting volcano=true in profile "addons-435364"
	I0805 22:50:43.678396   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.678414   18059 addons.go:234] Setting addon volcano=true in "addons-435364"
	I0805 22:50:43.678423   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.678430   18059 addons.go:69] Setting volumesnapshots=true in profile "addons-435364"
	I0805 22:50:43.678454   18059 addons.go:234] Setting addon volumesnapshots=true in "addons-435364"
	I0805 22:50:43.677845   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.678502   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.678465   18059 addons.go:69] Setting registry=true in profile "addons-435364"
	I0805 22:50:43.678552   18059 addons.go:234] Setting addon registry=true in "addons-435364"
	I0805 22:50:43.678589   18059 host.go:66] Checking if "addons-435364" exists ...
	I0805 22:50:43.678683   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.678706   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.678468   18059 addons.go:69] Setting storage-provisioner=true in profile "addons-435364"
	I0805 22:50:43.678113   18059 addons.go:69] Setting cloud-spanner=true in profile "addons-435364"
	I0805 22:50:43.678746   18059 addons.go:234] Setting addon storage-provisioner=true in "addons-435364"
	I0805 22:50:43.678686   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.678765   18059 addons.go:234] Setting addon cloud-spanner=true in "addons-435364"
	I0805 22:50:43.678771   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.678792   18059 host.go:66] Checking if "addons-435364" exists ...
	I0805 22:50:43.678797   18059 host.go:66] Checking if "addons-435364" exists ...
	I0805 22:50:43.678798   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.678933   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.678958   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.679087   18059 host.go:66] Checking if "addons-435364" exists ...
	I0805 22:50:43.679119   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.679139   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.679251   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.680473   18059 host.go:66] Checking if "addons-435364" exists ...
	I0805 22:50:43.680979   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.681049   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.683769   18059 out.go:177] * Verifying Kubernetes components...
	I0805 22:50:43.685529   18059 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 22:50:43.699503   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42103
	I0805 22:50:43.699912   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38453
	I0805 22:50:43.699936   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34427
	I0805 22:50:43.700080   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33697
	I0805 22:50:43.700080   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.700317   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.700390   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.700856   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.700875   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.700918   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.701029   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.701042   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.701180   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.701203   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.701382   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.701464   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.701485   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.701553   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.701858   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.701875   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.701913   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.701926   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.702223   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.702250   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.702309   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40125
	I0805 22:50:43.702435   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.702470   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.702770   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.703296   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.703327   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.703461   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.703498   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.703610   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.703629   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.704311   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.704335   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.707863   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.708526   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.708566   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.728256   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38819
	I0805 22:50:43.729118   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.729704   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.729728   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.730068   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.730669   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.730708   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.737318   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38277
	I0805 22:50:43.737831   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.738548   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.738565   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.738949   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.739597   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.739635   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.739825   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40117
	I0805 22:50:43.741556   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39621
	I0805 22:50:43.741990   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.742082   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.742613   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.742628   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.742744   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.742755   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.743119   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.743700   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.743734   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.743934   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40507
	I0805 22:50:43.744384   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.744899   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.744914   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.744974   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40911
	I0805 22:50:43.745251   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.745380   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.745518   18059 main.go:141] libmachine: (addons-435364) Calling .GetState
	I0805 22:50:43.745628   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.746213   18059 main.go:141] libmachine: (addons-435364) Calling .GetState
	I0805 22:50:43.746278   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.746296   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.746916   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.747960   18059 main.go:141] libmachine: (addons-435364) Calling .GetState
	I0805 22:50:43.749964   18059 addons.go:234] Setting addon default-storageclass=true in "addons-435364"
	I0805 22:50:43.750004   18059 host.go:66] Checking if "addons-435364" exists ...
	I0805 22:50:43.750393   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.750433   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.750641   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:43.750658   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:43.752882   18059 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0805 22:50:43.752882   18059 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0805 22:50:43.753393   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37129
	I0805 22:50:43.753425   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36175
	I0805 22:50:43.753848   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.754245   18059 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0805 22:50:43.754262   18059 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0805 22:50:43.754280   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:43.754365   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.754380   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.754673   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.754823   18059 main.go:141] libmachine: (addons-435364) Calling .GetState
	I0805 22:50:43.755725   18059 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0805 22:50:43.756766   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:43.757915   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.758280   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:43.758309   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.758400   18059 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0805 22:50:43.758423   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:43.758460   18059 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0805 22:50:43.758499   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.758631   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:43.758836   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:43.758988   18059 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa Username:docker}
	I0805 22:50:43.759381   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.759405   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.760393   18059 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0805 22:50:43.760410   18059 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0805 22:50:43.760428   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:43.761061   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40707
	I0805 22:50:43.761385   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.761519   18059 main.go:141] libmachine: (addons-435364) Calling .GetState
	I0805 22:50:43.761809   18059 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0805 22:50:43.763417   18059 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0805 22:50:43.763418   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.764007   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:43.764031   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.764168   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:43.764369   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:43.764486   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:43.764595   18059 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa Username:docker}
	I0805 22:50:43.764890   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36667
	I0805 22:50:43.765110   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:43.765187   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.765608   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.765626   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.765484   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.765926   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.766063   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.766080   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.766110   18059 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0805 22:50:43.766606   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.766995   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.766634   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.767290   18059 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0805 22:50:43.767801   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40035
	I0805 22:50:43.767645   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.767851   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.768120   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.768556   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.768735   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.769120   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.769349   18059 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0805 22:50:43.769366   18059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0805 22:50:43.769383   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:43.769661   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.769696   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.770097   18059 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0805 22:50:43.771676   18059 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0805 22:50:43.772932   18059 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0805 22:50:43.772949   18059 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0805 22:50:43.772977   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:43.772998   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.773558   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:43.773580   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.773749   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:43.774199   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:43.774421   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:43.774668   18059 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa Username:docker}
	I0805 22:50:43.776037   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.776424   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:43.776449   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.776684   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:43.776889   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:43.777077   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:43.777276   18059 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa Username:docker}
	I0805 22:50:43.780267   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39577
	I0805 22:50:43.780742   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.781148   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45257
	I0805 22:50:43.781255   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.781270   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.781455   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.781544   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.781985   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.782011   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.782186   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45863
	I0805 22:50:43.782377   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.782389   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.782661   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.782844   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.783217   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.783232   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.785914   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33073
	I0805 22:50:43.786450   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.786482   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.786769   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.787132   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.787466   18059 main.go:141] libmachine: (addons-435364) Calling .GetState
	I0805 22:50:43.787597   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.787619   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.787922   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.788104   18059 main.go:141] libmachine: (addons-435364) Calling .GetState
	I0805 22:50:43.790373   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:43.790979   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:43.792963   18059 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0805 22:50:43.793021   18059 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0805 22:50:43.793244   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34269
	I0805 22:50:43.793375   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36011
	I0805 22:50:43.793767   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.793809   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.793942   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46027
	I0805 22:50:43.794257   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.794274   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.794336   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.794750   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.794766   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.794969   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.795101   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.795237   18059 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0805 22:50:43.795252   18059 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0805 22:50:43.795265   18059 main.go:141] libmachine: (addons-435364) Calling .GetState
	I0805 22:50:43.795269   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:43.795321   18059 main.go:141] libmachine: (addons-435364) Calling .GetState
	I0805 22:50:43.795419   18059 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0805 22:50:43.795435   18059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0805 22:50:43.795451   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:43.795958   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.795978   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.797016   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.797635   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.797674   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.797884   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42629
	I0805 22:50:43.797895   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:43.798406   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.799592   18059 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-435364"
	I0805 22:50:43.799635   18059 host.go:66] Checking if "addons-435364" exists ...
	I0805 22:50:43.799797   18059 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0805 22:50:43.800016   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.800046   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.800233   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.800344   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.800355   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.800753   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.800815   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.800887   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:43.800904   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.801073   18059 main.go:141] libmachine: (addons-435364) Calling .GetState
	I0805 22:50:43.801116   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:43.801323   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:43.801341   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.801372   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:43.801504   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:43.801550   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:43.801705   18059 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa Username:docker}
	I0805 22:50:43.801906   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:43.802057   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:43.802238   18059 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa Username:docker}
	I0805 22:50:43.802565   18059 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0805 22:50:43.803280   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:43.806469   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44719
	I0805 22:50:43.806938   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.807487   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.807502   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.807552   18059 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 22:50:43.807647   18059 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0805 22:50:43.807910   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.808169   18059 main.go:141] libmachine: (addons-435364) Calling .GetState
	I0805 22:50:43.809922   18059 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0805 22:50:43.809941   18059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0805 22:50:43.809960   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:43.810646   18059 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 22:50:43.810659   18059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 22:50:43.810676   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:43.811182   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:43.813078   18059 out.go:177]   - Using image docker.io/registry:2.8.3
	I0805 22:50:43.814389   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.814414   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.814845   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:43.814865   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.814896   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:43.814910   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.815393   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:43.815463   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:43.815644   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:43.815699   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:43.815738   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:43.815818   18059 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa Username:docker}
	I0805 22:50:43.816123   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:43.816245   18059 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa Username:docker}
	I0805 22:50:43.816685   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45191
	I0805 22:50:43.817236   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.817683   18059 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0805 22:50:43.817860   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.817875   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.818299   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.818442   18059 main.go:141] libmachine: (addons-435364) Calling .GetState
	I0805 22:50:43.818839   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38631
	I0805 22:50:43.819119   18059 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0805 22:50:43.819139   18059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0805 22:50:43.819156   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:43.819332   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.819932   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.819948   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.820304   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.820500   18059 main.go:141] libmachine: (addons-435364) Calling .GetState
	I0805 22:50:43.821471   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42483
	I0805 22:50:43.821859   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.822579   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:43.822631   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.823451   18059 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 22:50:43.823465   18059 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 22:50:43.823482   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:43.823619   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44527
	I0805 22:50:43.823919   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41149
	I0805 22:50:43.824082   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.824300   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.824315   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.824507   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:43.824526   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.824560   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:43.824946   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:43.825036   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.824957   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.824987   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35483
	I0805 22:50:43.825257   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:43.825365   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:43.825474   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.825482   18059 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa Username:docker}
	I0805 22:50:43.825493   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.825612   18059 main.go:141] libmachine: (addons-435364) Calling .GetState
	I0805 22:50:43.825782   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.825938   18059 main.go:141] libmachine: (addons-435364) Calling .GetState
	I0805 22:50:43.826159   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.826177   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.826252   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.826722   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.826911   18059 main.go:141] libmachine: (addons-435364) Calling .GetState
	I0805 22:50:43.827152   18059 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0805 22:50:43.827533   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:43.827621   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.827635   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.828069   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.828111   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:43.828235   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.828440   18059 main.go:141] libmachine: (addons-435364) Calling .GetState
	I0805 22:50:43.828698   18059 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0805 22:50:43.828713   18059 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0805 22:50:43.828730   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:43.828823   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:43.828844   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.828868   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45717
	I0805 22:50:43.828956   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:43.829343   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:43.829358   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:43.829435   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:43.829607   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:43.829628   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:43.829704   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:43.829716   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:43.829729   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:43.829736   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:43.829791   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:43.829934   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:43.829960   18059 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa Username:docker}
	I0805 22:50:43.830197   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:43.830208   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	W0805 22:50:43.830281   18059 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0805 22:50:43.830382   18059 host.go:66] Checking if "addons-435364" exists ...
	I0805 22:50:43.830628   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.830640   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.830764   18059 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	I0805 22:50:43.831096   18059 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0805 22:50:43.831741   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.832145   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:43.832165   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.832312   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:43.832451   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:43.832557   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:43.832692   18059 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa Username:docker}
	I0805 22:50:43.832792   18059 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0805 22:50:43.832801   18059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0805 22:50:43.832815   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:43.832886   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.833013   18059 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0805 22:50:43.833024   18059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0805 22:50:43.833038   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:43.833450   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.833460   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.833989   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.834541   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.834558   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.835831   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.836041   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.836173   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:43.836196   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.836309   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:43.836468   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:43.836517   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:43.836527   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.836599   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:43.836740   18059 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa Username:docker}
	I0805 22:50:43.836758   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:43.836877   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:43.836983   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:43.837089   18059 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa Username:docker}
	W0805 22:50:43.860255   18059 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:39886->192.168.39.129:22: read: connection reset by peer
	I0805 22:50:43.860282   18059 retry.go:31] will retry after 258.36716ms: ssh: handshake failed: read tcp 192.168.39.1:39886->192.168.39.129:22: read: connection reset by peer
	I0805 22:50:43.874890   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39235
	I0805 22:50:43.874892   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45691
	I0805 22:50:43.875392   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.875474   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.875934   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.875954   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.876047   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.876065   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.876355   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.876394   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.876512   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:43.876564   18059 main.go:141] libmachine: (addons-435364) Calling .GetState
	I0805 22:50:43.878236   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:43.880181   18059 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0805 22:50:43.881864   18059 out.go:177]   - Using image docker.io/busybox:stable
	I0805 22:50:43.883245   18059 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0805 22:50:43.883262   18059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0805 22:50:43.883279   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:43.885681   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.886032   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:43.886053   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.886214   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:43.886413   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:43.886693   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:43.886839   18059 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa Username:docker}
	I0805 22:50:44.179211   18059 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 22:50:44.179287   18059 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0805 22:50:44.318662   18059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0805 22:50:44.351872   18059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 22:50:44.373044   18059 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0805 22:50:44.373068   18059 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0805 22:50:44.375292   18059 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0805 22:50:44.375306   18059 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0805 22:50:44.377480   18059 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0805 22:50:44.377499   18059 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0805 22:50:44.406029   18059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0805 22:50:44.417402   18059 node_ready.go:35] waiting up to 6m0s for node "addons-435364" to be "Ready" ...
	I0805 22:50:44.420727   18059 node_ready.go:49] node "addons-435364" has status "Ready":"True"
	I0805 22:50:44.420767   18059 node_ready.go:38] duration metric: took 3.317462ms for node "addons-435364" to be "Ready" ...
	I0805 22:50:44.420779   18059 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 22:50:44.433133   18059 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ng8rk" in "kube-system" namespace to be "Ready" ...
	I0805 22:50:44.465287   18059 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0805 22:50:44.465312   18059 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0805 22:50:44.466385   18059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 22:50:44.482026   18059 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0805 22:50:44.482048   18059 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0805 22:50:44.512219   18059 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0805 22:50:44.512240   18059 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0805 22:50:44.523110   18059 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0805 22:50:44.523131   18059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0805 22:50:44.542913   18059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0805 22:50:44.555948   18059 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0805 22:50:44.555973   18059 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0805 22:50:44.561851   18059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0805 22:50:44.625752   18059 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0805 22:50:44.625779   18059 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0805 22:50:44.628213   18059 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0805 22:50:44.628240   18059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0805 22:50:44.653550   18059 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0805 22:50:44.653576   18059 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0805 22:50:44.660568   18059 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0805 22:50:44.660594   18059 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0805 22:50:44.690970   18059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0805 22:50:44.709226   18059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0805 22:50:44.729037   18059 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0805 22:50:44.729067   18059 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0805 22:50:44.752497   18059 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0805 22:50:44.752517   18059 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0805 22:50:44.817527   18059 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0805 22:50:44.817548   18059 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0805 22:50:44.854541   18059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0805 22:50:44.869810   18059 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0805 22:50:44.869833   18059 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0805 22:50:44.872801   18059 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0805 22:50:44.872815   18059 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0805 22:50:44.917015   18059 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 22:50:44.917035   18059 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0805 22:50:44.962004   18059 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0805 22:50:44.962026   18059 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0805 22:50:45.080496   18059 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0805 22:50:45.080523   18059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0805 22:50:45.119095   18059 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0805 22:50:45.119134   18059 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0805 22:50:45.130968   18059 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0805 22:50:45.130988   18059 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0805 22:50:45.160034   18059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0805 22:50:45.223173   18059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 22:50:45.249810   18059 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0805 22:50:45.249831   18059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0805 22:50:45.282668   18059 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0805 22:50:45.282700   18059 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0805 22:50:45.302054   18059 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0805 22:50:45.302080   18059 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0805 22:50:45.484584   18059 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0805 22:50:45.484604   18059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0805 22:50:45.537109   18059 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0805 22:50:45.537147   18059 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0805 22:50:45.608780   18059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0805 22:50:45.635156   18059 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0805 22:50:45.635187   18059 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0805 22:50:45.636895   18059 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0805 22:50:45.636949   18059 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0805 22:50:45.841167   18059 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0805 22:50:45.841193   18059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0805 22:50:45.847559   18059 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0805 22:50:45.847620   18059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0805 22:50:46.092936   18059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0805 22:50:46.199292   18059 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0805 22:50:46.199312   18059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0805 22:50:46.384460   18059 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.205137516s)
	I0805 22:50:46.384491   18059 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0805 22:50:46.467646   18059 pod_ready.go:102] pod "coredns-7db6d8ff4d-ng8rk" in "kube-system" namespace has status "Ready":"False"
	I0805 22:50:46.511101   18059 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0805 22:50:46.511128   18059 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0805 22:50:46.916480   18059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0805 22:50:46.940397   18059 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-435364" context rescaled to 1 replicas
	I0805 22:50:48.540472   18059 pod_ready.go:102] pod "coredns-7db6d8ff4d-ng8rk" in "kube-system" namespace has status "Ready":"False"
	I0805 22:50:49.084244   18059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.76554551s)
	I0805 22:50:49.084293   18059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.732388526s)
	I0805 22:50:49.084304   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:49.084320   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:49.084329   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:49.084344   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:49.084415   18059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.67834613s)
	I0805 22:50:49.084449   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:49.084461   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:49.084484   18059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.618070914s)
	I0805 22:50:49.084514   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:49.084528   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:49.084845   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:49.084870   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:49.084880   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:49.084894   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:49.084896   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:49.084902   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:49.084905   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:49.084904   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:49.084911   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:49.084882   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:49.084926   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:49.084929   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:49.084932   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:49.084937   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:49.084945   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:49.084912   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:49.084951   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:49.084845   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:49.085155   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:49.085203   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:49.085218   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:49.085232   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:49.085240   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:49.085241   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:49.085248   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:49.085420   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:49.085440   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:49.086546   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:49.086561   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:49.086581   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:49.148718   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:49.148746   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:49.149042   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:49.149085   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	W0805 22:50:49.149192   18059 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0805 22:50:49.164023   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:49.164043   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:49.164351   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:49.164369   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:49.164373   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:49.440491   18059 pod_ready.go:92] pod "coredns-7db6d8ff4d-ng8rk" in "kube-system" namespace has status "Ready":"True"
	I0805 22:50:49.440513   18059 pod_ready.go:81] duration metric: took 5.007354905s for pod "coredns-7db6d8ff4d-ng8rk" in "kube-system" namespace to be "Ready" ...
	I0805 22:50:49.440522   18059 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qc4fj" in "kube-system" namespace to be "Ready" ...
	I0805 22:50:49.445585   18059 pod_ready.go:92] pod "coredns-7db6d8ff4d-qc4fj" in "kube-system" namespace has status "Ready":"True"
	I0805 22:50:49.445604   18059 pod_ready.go:81] duration metric: took 5.075791ms for pod "coredns-7db6d8ff4d-qc4fj" in "kube-system" namespace to be "Ready" ...
	I0805 22:50:49.445613   18059 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-435364" in "kube-system" namespace to be "Ready" ...
	I0805 22:50:49.450306   18059 pod_ready.go:92] pod "etcd-addons-435364" in "kube-system" namespace has status "Ready":"True"
	I0805 22:50:49.450331   18059 pod_ready.go:81] duration metric: took 4.710521ms for pod "etcd-addons-435364" in "kube-system" namespace to be "Ready" ...
	I0805 22:50:49.450347   18059 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-435364" in "kube-system" namespace to be "Ready" ...
	I0805 22:50:49.455933   18059 pod_ready.go:92] pod "kube-apiserver-addons-435364" in "kube-system" namespace has status "Ready":"True"
	I0805 22:50:49.455962   18059 pod_ready.go:81] duration metric: took 5.604264ms for pod "kube-apiserver-addons-435364" in "kube-system" namespace to be "Ready" ...
	I0805 22:50:49.455974   18059 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-435364" in "kube-system" namespace to be "Ready" ...
	I0805 22:50:49.460394   18059 pod_ready.go:92] pod "kube-controller-manager-addons-435364" in "kube-system" namespace has status "Ready":"True"
	I0805 22:50:49.460419   18059 pod_ready.go:81] duration metric: took 4.436596ms for pod "kube-controller-manager-addons-435364" in "kube-system" namespace to be "Ready" ...
	I0805 22:50:49.460431   18059 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lt8r2" in "kube-system" namespace to be "Ready" ...
	I0805 22:50:49.846279   18059 pod_ready.go:92] pod "kube-proxy-lt8r2" in "kube-system" namespace has status "Ready":"True"
	I0805 22:50:49.846311   18059 pod_ready.go:81] duration metric: took 385.870407ms for pod "kube-proxy-lt8r2" in "kube-system" namespace to be "Ready" ...
	I0805 22:50:49.846324   18059 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-435364" in "kube-system" namespace to be "Ready" ...
	I0805 22:50:50.237689   18059 pod_ready.go:92] pod "kube-scheduler-addons-435364" in "kube-system" namespace has status "Ready":"True"
	I0805 22:50:50.237717   18059 pod_ready.go:81] duration metric: took 391.384837ms for pod "kube-scheduler-addons-435364" in "kube-system" namespace to be "Ready" ...
	I0805 22:50:50.237728   18059 pod_ready.go:38] duration metric: took 5.816931704s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 22:50:50.237746   18059 api_server.go:52] waiting for apiserver process to appear ...
	I0805 22:50:50.237808   18059 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 22:50:50.887724   18059 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0805 22:50:50.887759   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:50.890430   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:50.890825   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:50.890854   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:50.891067   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:50.891249   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:50.891408   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:50.891567   18059 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa Username:docker}
	I0805 22:50:51.342921   18059 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0805 22:50:51.486984   18059 addons.go:234] Setting addon gcp-auth=true in "addons-435364"
	I0805 22:50:51.487037   18059 host.go:66] Checking if "addons-435364" exists ...
	I0805 22:50:51.487344   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:51.487376   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:51.502034   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38685
	I0805 22:50:51.502502   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:51.503009   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:51.503029   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:51.503428   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:51.503865   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:51.503891   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:51.518846   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43837
	I0805 22:50:51.519325   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:51.519856   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:51.519903   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:51.520237   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:51.520405   18059 main.go:141] libmachine: (addons-435364) Calling .GetState
	I0805 22:50:51.521878   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:51.522124   18059 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0805 22:50:51.522143   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:51.524687   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:51.525134   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:51.525159   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:51.525164   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:51.525306   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:51.525448   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:51.525559   18059 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa Username:docker}
	I0805 22:50:52.883768   18059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.340815998s)
	I0805 22:50:52.883819   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:52.883833   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:52.883860   18059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.32198163s)
	I0805 22:50:52.883901   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:52.883916   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:52.883956   18059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.174704669s)
	I0805 22:50:52.883917   18059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.19291952s)
	I0805 22:50:52.883988   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:52.884000   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:52.884005   18059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.029430947s)
	I0805 22:50:52.883989   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:52.884039   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:52.884055   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:52.884056   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:52.884053   18059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.723989148s)
	I0805 22:50:52.884097   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:52.884104   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:52.884132   18059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.660936427s)
	I0805 22:50:52.884151   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:52.884181   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:52.884201   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:52.884237   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:52.884244   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:52.884252   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:52.884259   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:52.884310   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:52.884317   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:52.884324   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:52.884324   18059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.275508668s)
	I0805 22:50:52.884332   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	W0805 22:50:52.884351   18059 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0805 22:50:52.884383   18059 retry.go:31] will retry after 291.464679ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0805 22:50:52.884441   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:52.884451   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:52.884459   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:52.884466   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:52.884493   18059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.791499664s)
	I0805 22:50:52.884511   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:52.884522   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:52.884529   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:52.884549   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:52.884556   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:52.884564   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:52.884570   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:52.884595   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:52.884603   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:52.884612   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:52.884619   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:52.884680   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:52.884700   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:52.884708   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:52.884714   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:52.884721   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:52.886002   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:52.886034   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:52.886042   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:52.886058   18059 addons.go:475] Verifying addon ingress=true in "addons-435364"
	I0805 22:50:52.886293   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:52.886326   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:52.886333   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:52.886732   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:52.886762   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:52.886783   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:52.886796   18059 addons.go:475] Verifying addon metrics-server=true in "addons-435364"
	I0805 22:50:52.886797   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:52.886815   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:52.886826   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:52.886830   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:52.886836   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:52.886852   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:52.886867   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:52.887444   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:52.887477   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:52.887485   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:52.887777   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:52.887812   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:52.887823   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:52.887872   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:52.887891   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:52.886765   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:52.886817   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:52.887949   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:52.888147   18059 out.go:177] * Verifying ingress addon...
	I0805 22:50:52.888699   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:52.888716   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:52.888724   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:52.888731   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:52.889189   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:52.889223   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:52.889230   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:52.889239   18059 addons.go:475] Verifying addon registry=true in "addons-435364"
	I0805 22:50:52.889458   18059 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-435364 service yakd-dashboard -n yakd-dashboard
	
	I0805 22:50:52.890368   18059 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0805 22:50:52.890673   18059 out.go:177] * Verifying registry addon...
	I0805 22:50:52.892712   18059 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0805 22:50:52.911983   18059 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0805 22:50:52.912007   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:52.916782   18059 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0805 22:50:52.916800   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:53.176847   18059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0805 22:50:53.403966   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:53.406283   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:53.873455   18059 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.635622169s)
	I0805 22:50:53.873470   18059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.956933725s)
	I0805 22:50:53.873490   18059 api_server.go:72] duration metric: took 10.196307174s to wait for apiserver process to appear ...
	I0805 22:50:53.873497   18059 api_server.go:88] waiting for apiserver healthz status ...
	I0805 22:50:53.873518   18059 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0805 22:50:53.873517   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:53.873523   18059 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.35138169s)
	I0805 22:50:53.873530   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:53.873878   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:53.873893   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:53.873908   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:53.873916   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:53.874197   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:53.874218   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:53.874202   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:53.874229   18059 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-435364"
	I0805 22:50:53.875063   18059 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0805 22:50:53.875924   18059 out.go:177] * Verifying csi-hostpath-driver addon...
	I0805 22:50:53.877314   18059 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0805 22:50:53.878276   18059 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0805 22:50:53.878468   18059 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0805 22:50:53.878479   18059 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0805 22:50:53.921505   18059 api_server.go:279] https://192.168.39.129:8443/healthz returned 200:
	ok
	I0805 22:50:53.924856   18059 api_server.go:141] control plane version: v1.30.3
	I0805 22:50:53.924878   18059 api_server.go:131] duration metric: took 51.374856ms to wait for apiserver health ...
	I0805 22:50:53.924886   18059 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 22:50:53.931844   18059 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0805 22:50:53.931865   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:53.937825   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:53.949822   18059 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0805 22:50:53.949841   18059 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0805 22:50:53.955641   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:53.971438   18059 system_pods.go:59] 19 kube-system pods found
	I0805 22:50:53.971462   18059 system_pods.go:61] "coredns-7db6d8ff4d-ng8rk" [2091f1e9-b1aa-45fd-8197-0f661fcf784e] Running
	I0805 22:50:53.971466   18059 system_pods.go:61] "coredns-7db6d8ff4d-qc4fj" [2374285d-3c1f-4403-a6a7-c6bfd6ea2be9] Running
	I0805 22:50:53.971472   18059 system_pods.go:61] "csi-hostpath-attacher-0" [c3d74a8e-fdb7-463c-8ed0-89f152a701f1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0805 22:50:53.971476   18059 system_pods.go:61] "csi-hostpath-resizer-0" [2977ed62-99ff-4c08-8e71-b4f0c9bf67d3] Pending
	I0805 22:50:53.971484   18059 system_pods.go:61] "csi-hostpathplugin-sb9bm" [ca40b966-32e8-4c43-8ce2-7574141f44b2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0805 22:50:53.971491   18059 system_pods.go:61] "etcd-addons-435364" [575da880-ae3c-4192-95aa-5c659f5ccb5d] Running
	I0805 22:50:53.971495   18059 system_pods.go:61] "kube-apiserver-addons-435364" [45f478e1-eebb-4cde-bde2-f4d32decde9e] Running
	I0805 22:50:53.971498   18059 system_pods.go:61] "kube-controller-manager-addons-435364" [a9924751-aef6-4ba5-b29b-63491edecb83] Running
	I0805 22:50:53.971503   18059 system_pods.go:61] "kube-ingress-dns-minikube" [a3229854-d9da-4ed8-ad6f-5a4b35dd430f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0805 22:50:53.971508   18059 system_pods.go:61] "kube-proxy-lt8r2" [c1a7c99c-379f-4e2d-b241-4de97adffa76] Running
	I0805 22:50:53.971511   18059 system_pods.go:61] "kube-scheduler-addons-435364" [127dd332-e714-4512-9460-acc0e7b194ff] Running
	I0805 22:50:53.971515   18059 system_pods.go:61] "metrics-server-c59844bb4-m9t52" [f825462d-de15-4aa7-9436-76eda3bbd66f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 22:50:53.971523   18059 system_pods.go:61] "nvidia-device-plugin-daemonset-jk9q5" [1a23f5f9-2fc4-453c-9381-177bf606032d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0805 22:50:53.971528   18059 system_pods.go:61] "registry-698f998955-4stmn" [c0716044-6d96-44a5-ab8d-03023e2da298] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0805 22:50:53.971535   18059 system_pods.go:61] "registry-proxy-2dplh" [a8ad0955-3945-41ac-a7b2-78bf1d724a1a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0805 22:50:53.971540   18059 system_pods.go:61] "snapshot-controller-745499f584-7jwrf" [19b31468-b55d-4eb4-a008-7b9b9af0e582] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0805 22:50:53.971547   18059 system_pods.go:61] "snapshot-controller-745499f584-lphmq" [24eb6083-c3a3-4873-8a71-0e4c16b350ff] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0805 22:50:53.971551   18059 system_pods.go:61] "storage-provisioner" [cfbc5ee9-491f-4c8d-aecc-72ba061092ec] Running
	I0805 22:50:53.971557   18059 system_pods.go:61] "tiller-deploy-6677d64bcd-qn6ln" [4188df06-7e5f-4218-bf0f-658f8c51bfb9] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0805 22:50:53.971564   18059 system_pods.go:74] duration metric: took 46.673256ms to wait for pod list to return data ...
	I0805 22:50:53.971573   18059 default_sa.go:34] waiting for default service account to be created ...
	I0805 22:50:53.977852   18059 default_sa.go:45] found service account: "default"
	I0805 22:50:53.977871   18059 default_sa.go:55] duration metric: took 6.291945ms for default service account to be created ...
	I0805 22:50:53.977878   18059 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 22:50:54.005241   18059 system_pods.go:86] 19 kube-system pods found
	I0805 22:50:54.005271   18059 system_pods.go:89] "coredns-7db6d8ff4d-ng8rk" [2091f1e9-b1aa-45fd-8197-0f661fcf784e] Running
	I0805 22:50:54.005278   18059 system_pods.go:89] "coredns-7db6d8ff4d-qc4fj" [2374285d-3c1f-4403-a6a7-c6bfd6ea2be9] Running
	I0805 22:50:54.005287   18059 system_pods.go:89] "csi-hostpath-attacher-0" [c3d74a8e-fdb7-463c-8ed0-89f152a701f1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0805 22:50:54.005296   18059 system_pods.go:89] "csi-hostpath-resizer-0" [2977ed62-99ff-4c08-8e71-b4f0c9bf67d3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0805 22:50:54.005311   18059 system_pods.go:89] "csi-hostpathplugin-sb9bm" [ca40b966-32e8-4c43-8ce2-7574141f44b2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0805 22:50:54.005322   18059 system_pods.go:89] "etcd-addons-435364" [575da880-ae3c-4192-95aa-5c659f5ccb5d] Running
	I0805 22:50:54.005333   18059 system_pods.go:89] "kube-apiserver-addons-435364" [45f478e1-eebb-4cde-bde2-f4d32decde9e] Running
	I0805 22:50:54.005341   18059 system_pods.go:89] "kube-controller-manager-addons-435364" [a9924751-aef6-4ba5-b29b-63491edecb83] Running
	I0805 22:50:54.005354   18059 system_pods.go:89] "kube-ingress-dns-minikube" [a3229854-d9da-4ed8-ad6f-5a4b35dd430f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0805 22:50:54.005360   18059 system_pods.go:89] "kube-proxy-lt8r2" [c1a7c99c-379f-4e2d-b241-4de97adffa76] Running
	I0805 22:50:54.005366   18059 system_pods.go:89] "kube-scheduler-addons-435364" [127dd332-e714-4512-9460-acc0e7b194ff] Running
	I0805 22:50:54.005375   18059 system_pods.go:89] "metrics-server-c59844bb4-m9t52" [f825462d-de15-4aa7-9436-76eda3bbd66f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 22:50:54.005389   18059 system_pods.go:89] "nvidia-device-plugin-daemonset-jk9q5" [1a23f5f9-2fc4-453c-9381-177bf606032d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0805 22:50:54.005402   18059 system_pods.go:89] "registry-698f998955-4stmn" [c0716044-6d96-44a5-ab8d-03023e2da298] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0805 22:50:54.005415   18059 system_pods.go:89] "registry-proxy-2dplh" [a8ad0955-3945-41ac-a7b2-78bf1d724a1a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0805 22:50:54.005427   18059 system_pods.go:89] "snapshot-controller-745499f584-7jwrf" [19b31468-b55d-4eb4-a008-7b9b9af0e582] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0805 22:50:54.005442   18059 system_pods.go:89] "snapshot-controller-745499f584-lphmq" [24eb6083-c3a3-4873-8a71-0e4c16b350ff] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0805 22:50:54.005452   18059 system_pods.go:89] "storage-provisioner" [cfbc5ee9-491f-4c8d-aecc-72ba061092ec] Running
	I0805 22:50:54.005463   18059 system_pods.go:89] "tiller-deploy-6677d64bcd-qn6ln" [4188df06-7e5f-4218-bf0f-658f8c51bfb9] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0805 22:50:54.005475   18059 system_pods.go:126] duration metric: took 27.590484ms to wait for k8s-apps to be running ...
	I0805 22:50:54.005490   18059 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 22:50:54.005538   18059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 22:50:54.017525   18059 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0805 22:50:54.017547   18059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0805 22:50:54.080589   18059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0805 22:50:54.392993   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:54.394879   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:54.411308   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:54.886746   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:54.894681   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:54.898162   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:55.384553   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:55.395320   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:55.398149   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:55.450511   18059 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.444950765s)
	I0805 22:50:55.450546   18059 system_svc.go:56] duration metric: took 1.445055617s WaitForService to wait for kubelet
	I0805 22:50:55.450558   18059 kubeadm.go:582] duration metric: took 11.773374958s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 22:50:55.450577   18059 node_conditions.go:102] verifying NodePressure condition ...
	I0805 22:50:55.450509   18059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.273605615s)
	I0805 22:50:55.450656   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:55.450670   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:55.450931   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:55.450944   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:55.450952   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:55.450959   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:55.451218   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:55.451278   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:55.451293   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:55.453813   18059 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 22:50:55.453833   18059 node_conditions.go:123] node cpu capacity is 2
	I0805 22:50:55.453844   18059 node_conditions.go:105] duration metric: took 3.261865ms to run NodePressure ...
	I0805 22:50:55.453855   18059 start.go:241] waiting for startup goroutines ...
	I0805 22:50:55.794206   18059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.713578108s)
	I0805 22:50:55.794258   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:55.794270   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:55.794523   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:55.794571   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:55.794592   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:55.794614   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:55.794845   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:55.794864   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:55.796735   18059 addons.go:475] Verifying addon gcp-auth=true in "addons-435364"
	I0805 22:50:55.799779   18059 out.go:177] * Verifying gcp-auth addon...
	I0805 22:50:55.801747   18059 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0805 22:50:55.834203   18059 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0805 22:50:55.834227   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:55.903132   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:55.932737   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:55.942611   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:56.306547   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:56.390604   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:56.401418   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:56.407342   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:56.805992   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:56.886789   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:56.895416   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:56.897303   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:57.305792   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:57.385742   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:57.394163   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:57.396395   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:57.805904   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:57.884355   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:57.895240   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:57.897766   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:58.306415   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:58.384804   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:58.396291   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:58.398580   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:58.805970   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:58.884085   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:58.894366   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:58.897088   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:59.305934   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:59.383822   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:59.394349   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:59.397241   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:59.806057   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:59.885080   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:59.896240   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:59.899452   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:00.306082   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:00.383603   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:00.394623   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:00.397443   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:00.806127   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:00.885914   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:00.895131   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:00.897258   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:01.306509   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:01.388270   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:01.394363   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:01.397262   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:01.808485   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:01.884527   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:01.902309   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:01.902588   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:02.306605   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:02.383639   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:02.394871   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:02.396821   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:02.806240   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:02.884561   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:02.894971   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:02.898247   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:03.306214   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:03.384310   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:03.394216   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:03.397170   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:03.805860   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:03.884825   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:03.894312   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:03.896729   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:04.305494   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:04.384412   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:04.394998   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:04.397049   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:04.806006   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:04.884064   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:04.896044   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:04.897845   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:05.341303   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:05.385348   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:05.395810   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:05.399777   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:05.806575   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:05.883549   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:05.894481   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:05.897454   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:06.306225   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:06.384643   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:06.394734   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:06.397597   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:06.806520   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:06.883665   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:06.894596   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:06.897788   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:07.305277   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:07.384022   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:07.395206   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:07.397407   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:07.806364   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:07.884540   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:07.898444   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:07.899788   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:08.612555   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:08.613305   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:08.614720   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:08.614854   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:08.806474   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:08.884429   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:08.895315   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:08.898310   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:09.305704   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:09.385670   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:09.401729   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:09.402064   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:09.806476   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:09.884497   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:09.896336   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:09.898435   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:10.305702   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:10.384393   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:10.395517   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:10.398100   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:10.809398   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:10.885465   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:10.894437   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:10.896692   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:11.305941   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:11.385034   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:11.395883   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:11.398653   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:11.805999   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:11.886682   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:11.895544   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:11.898824   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:12.306306   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:12.384435   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:12.394167   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:12.397768   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:12.806522   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:12.893284   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:12.894868   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:12.896949   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:13.305381   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:13.384501   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:13.397850   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:13.399347   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:13.806393   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:13.888535   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:13.898330   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:13.903215   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:14.305707   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:14.388878   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:14.395229   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:14.397409   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:14.805711   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:14.883785   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:14.894796   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:14.897125   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:15.305414   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:15.385853   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:15.398056   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:15.403305   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:15.806066   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:15.884969   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:15.894640   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:15.897551   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:16.306151   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:16.385209   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:16.395404   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:16.398952   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:16.805488   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:16.885464   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:16.894695   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:16.897200   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:17.305736   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:17.384034   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:17.396956   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:17.401186   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:17.805246   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:17.884517   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:17.894133   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:17.912224   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:18.305247   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:18.384624   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:18.394587   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:18.397099   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:19.037271   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:19.049127   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:19.049538   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:19.049735   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:19.305888   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:19.390106   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:19.395313   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:19.400097   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:19.807446   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:19.885778   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:19.894711   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:19.897488   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:20.305572   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:20.383749   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:20.394706   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:20.397335   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:20.806240   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:20.883812   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:20.897744   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:20.898616   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:21.305809   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:21.383876   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:21.394573   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:21.397445   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:21.806743   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:21.883905   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:21.894656   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:21.897620   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:22.307789   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:22.387150   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:22.395521   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:22.399635   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:22.805432   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:22.885431   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:22.894200   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:22.896770   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:23.306572   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:23.384162   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:23.394312   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:23.397350   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:23.805560   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:23.883998   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:23.895242   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:23.897791   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:24.305410   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:24.390389   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:24.402620   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:24.403548   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:24.806315   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:24.885108   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:24.896347   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:24.898601   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:25.306200   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:25.383864   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:25.394399   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:25.396878   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:25.806331   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:25.884373   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:25.894336   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:25.897200   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:26.305705   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:26.383474   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:26.394150   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:26.396775   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:26.805259   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:26.884164   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:26.894284   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:26.897709   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:27.306208   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:27.384186   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:27.394719   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:27.398107   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:27.806016   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:27.884377   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:27.895410   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:27.897724   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:28.306244   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:28.384230   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:28.395182   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:28.397617   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:28.805976   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:28.884587   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:28.894467   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:28.897541   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:29.314019   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:29.384655   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:29.397912   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:29.400460   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:29.806481   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:29.885308   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:29.894811   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:29.899335   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:30.305822   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:30.384080   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:30.394942   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:30.397585   18059 kapi.go:107] duration metric: took 37.504870684s to wait for kubernetes.io/minikube-addons=registry ...
	I0805 22:51:30.807376   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:30.885233   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:30.894780   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:31.313958   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:31.384121   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:31.394513   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:31.807888   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:32.359569   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:32.359775   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:32.362224   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:32.385144   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:32.395115   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:32.805627   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:32.883314   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:32.894292   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:33.305479   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:33.385579   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:33.394246   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:33.805943   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:33.885803   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:33.894437   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:34.305814   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:34.384002   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:34.394560   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:34.807444   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:34.887484   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:34.895801   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:35.307080   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:35.384644   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:35.395485   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:35.805383   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:35.884967   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:35.894867   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:36.305982   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:36.384314   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:36.394531   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:36.805744   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:36.884074   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:36.895116   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:37.304897   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:37.383852   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:37.394243   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:37.805883   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:37.884517   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:37.895892   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:38.309940   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:38.384638   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:38.395478   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:38.805738   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:38.884458   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:38.894864   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:39.306344   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:39.391115   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:39.394841   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:39.805822   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:39.883781   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:39.894822   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:40.305893   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:40.383837   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:40.398728   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:40.806976   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:40.884613   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:40.895114   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:41.305663   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:41.383595   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:41.394650   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:41.807174   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:41.884087   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:41.894695   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:42.306238   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:42.384375   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:42.394457   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:42.805340   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:42.884910   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:42.894508   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:43.305576   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:43.383200   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:43.395061   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:43.806098   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:43.883258   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:43.894220   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:44.305942   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:44.384162   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:44.394656   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:44.806810   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:44.883693   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:44.895213   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:45.306833   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:45.384061   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:45.394899   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:45.807504   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:46.129855   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:46.131947   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:46.305715   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:46.383716   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:46.394577   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:46.806002   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:46.884346   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:46.894379   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:47.306586   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:47.383251   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:47.394234   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:47.805720   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:48.082709   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:48.087609   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:48.306776   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:48.384401   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:48.394160   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:48.808404   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:48.883685   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:48.895691   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:49.306876   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:49.385465   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:49.394231   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:49.805160   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:49.887676   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:49.894784   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:50.310908   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:50.383748   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:50.394639   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:50.805239   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:50.884114   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:50.893793   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:51.305672   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:51.383602   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:51.394641   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:51.805988   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:51.887536   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:51.902685   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:52.306039   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:52.384024   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:52.394623   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:52.805811   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:52.884179   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:52.894969   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:53.305733   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:53.383078   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:53.395070   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:53.805951   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:53.887517   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:53.895745   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:54.305340   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:54.384330   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:54.393760   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:54.805872   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:54.883612   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:54.894660   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:55.305886   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:55.383765   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:55.398614   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:55.806221   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:55.884335   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:55.896724   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:56.306877   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:56.385288   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:56.394114   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:56.806187   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:56.885864   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:56.894560   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:57.305681   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:57.383579   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:57.394190   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:57.818796   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:57.890635   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:57.896128   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:58.306682   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:58.383354   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:58.397160   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:58.806015   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:58.885013   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:58.901034   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:59.308018   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:59.390002   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:59.395691   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:59.805572   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:59.884508   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:59.900317   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:00.305127   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:52:00.384436   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:00.395727   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:00.805856   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:52:00.883890   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:00.895013   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:01.306318   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:52:01.384006   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:01.395015   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:01.808363   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:52:01.885672   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:01.897782   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:02.310374   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:52:02.385578   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:02.394611   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:02.805531   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:52:02.887489   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:02.894862   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:03.305375   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:52:03.384177   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:03.394997   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:03.805843   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:52:03.884560   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:03.895411   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:04.306689   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:52:04.383989   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:04.395551   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:04.807060   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:52:04.883702   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:04.896549   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:05.306335   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:52:05.384684   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:05.394251   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:05.806270   18059 kapi.go:107] duration metric: took 1m10.004521145s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0805 22:52:05.808431   18059 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-435364 cluster.
	I0805 22:52:05.810077   18059 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0805 22:52:05.811722   18059 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
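The three gcp-auth messages above describe the addon's behavior: credentials are mounted into every newly created pod unless the pod carries a `gcp-auth-skip-secret` label. As a minimal illustrative sketch (not taken from this test run), the Go snippet below builds such a pod manifest with the Kubernetes API types and prints it as YAML; the pod name, container image, and the label value "true" are assumptions for illustration only.

	// Hypothetical example: a pod manifest carrying the gcp-auth-skip-secret
	// label mentioned in the log above. Name, image, and label value are assumed.
	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"sigs.k8s.io/yaml"
	)

	func main() {
		pod := corev1.Pod{
			TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
			ObjectMeta: metav1.ObjectMeta{
				Name: "no-gcp-creds", // assumed pod name
				Labels: map[string]string{
					// Per the log message, a pod with this label key is skipped
					// by the gcp-auth credential mounting.
					"gcp-auth-skip-secret": "true", // value assumed
				},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{
					{Name: "app", Image: "busybox"}, // assumed container
				},
			},
		}

		out, err := yaml.Marshal(&pod)
		if err != nil {
			panic(err)
		}
		fmt.Print(string(out))
	}

Applying the printed manifest (for example with `kubectl apply -f`) would create a pod that, per the message above, does not get the GCP credentials mounted.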
	I0805 22:52:05.884206   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:05.894286   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:06.384605   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:06.394931   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:06.883342   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:06.894006   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:07.383367   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:07.394125   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:07.884372   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:07.894024   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:08.383239   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:08.397905   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:08.888204   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:08.898670   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:09.383352   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:09.393961   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:09.883846   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:09.895117   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:10.384131   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:10.394894   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:10.884399   18059 kapi.go:107] duration metric: took 1m17.00612132s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0805 22:52:10.895660   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:11.394819   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:11.895020   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:12.394588   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:12.896109   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:13.394917   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:13.894462   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:14.395784   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:14.895594   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:15.395321   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:15.896618   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:16.394467   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:16.896216   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:17.395260   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:17.895321   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:18.395383   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:18.896662   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:19.395617   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:19.895654   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:20.395246   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:20.895134   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:21.394987   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:21.895183   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:22.396341   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:22.895517   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:23.395590   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:23.895734   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:24.394627   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:24.894767   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:25.394793   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:25.894982   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:26.397326   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:26.895154   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:27.394989   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:27.894472   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:28.395384   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:28.896093   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:29.395345   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:29.895178   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:30.395134   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:30.895654   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:31.396110   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:31.896298   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:32.395262   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:32.897391   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:33.395534   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:33.900137   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:34.394721   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:34.894814   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:35.394753   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:35.894680   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:36.395539   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:36.896620   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:37.396457   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:37.895646   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:38.400663   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:38.895311   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:39.395320   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:39.894839   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:40.394210   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:40.895893   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:41.394834   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:41.894918   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:42.395350   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:42.895411   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:43.395832   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:43.895494   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:44.395390   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:44.895255   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:45.395475   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:45.896181   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:46.395501   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:46.896480   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:47.395515   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:47.896059   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:48.395520   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:48.895063   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:49.395490   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:49.895735   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:50.395484   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:50.896322   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:51.396308   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:51.897273   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:52.397243   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:52.895585   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:53.396134   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:53.895692   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:54.394595   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:54.895406   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:55.395412   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:55.896554   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:56.395473   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:56.895670   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:57.394295   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:57.895319   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:58.396723   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:58.895672   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:59.396206   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:59.895335   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:00.399317   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:00.895510   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:01.395903   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:01.896217   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:02.397569   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:02.896826   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:03.394988   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:03.895818   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:04.395455   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:04.895831   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:05.395761   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:05.894988   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:06.395045   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:06.895066   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:07.395251   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:07.895282   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:08.395925   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:08.896715   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:09.395483   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:09.895510   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:10.395245   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:10.894313   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:11.831692   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:11.895687   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:12.395448   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:12.896912   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:13.394476   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:13.896393   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:14.549979   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:14.896068   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:15.396987   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:15.895327   18059 kapi.go:107] duration metric: took 2m23.004955809s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0805 22:53:15.897040   18059 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, storage-provisioner-rancher, helm-tiller, metrics-server, inspektor-gadget, nvidia-device-plugin, cloud-spanner, yakd, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I0805 22:53:15.898514   18059 addons.go:510] duration metric: took 2m32.221311776s for enable addons: enabled=[ingress-dns storage-provisioner storage-provisioner-rancher helm-tiller metrics-server inspektor-gadget nvidia-device-plugin cloud-spanner yakd volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I0805 22:53:15.898554   18059 start.go:246] waiting for cluster config update ...
	I0805 22:53:15.898577   18059 start.go:255] writing updated cluster config ...
	I0805 22:53:15.898818   18059 ssh_runner.go:195] Run: rm -f paused
	I0805 22:53:15.950673   18059 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0805 22:53:15.952813   18059 out.go:177] * Done! kubectl is now configured to use "addons-435364" cluster and "default" namespace by default
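For context on the long run of kapi.go:96/107 lines above: they come from a poll loop that re-checks the pods matching a label selector roughly every half second and, once they leave Pending, reports the elapsed time as a duration metric. The sketch below reproduces that pattern in outline only; podState is an assumed stand-in for the real Kubernetes lookup, and the interval and messages merely mirror the log output, not minikube's actual implementation.

	// Hypothetical sketch of the poll-until-ready pattern seen in the log.
	package main

	import (
		"fmt"
		"time"
	)

	// podState is an assumed placeholder returning the aggregate state
	// ("Pending", "Running", ...) of pods matching the label selector;
	// a real version would query the Kubernetes API.
	func podState(selector string) string {
		return "Running"
	}

	func waitForPods(selector string, timeout time.Duration) error {
		start := time.Now()
		deadline := start.Add(timeout)
		for {
			state := podState(selector)
			if state == "Running" {
				// Mirrors the "duration metric: took ..." lines in the log.
				fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %q (last state: %s)", selector, state)
			}
			// Mirrors the repeated "waiting for pod ..." lines in the log.
			fmt.Printf("waiting for pod %q, current state: %s\n", selector, state)
			time.Sleep(500 * time.Millisecond) // ~0.5s interval, as in the timestamps above
		}
	}

	func main() {
		_ = waitForPods("app.kubernetes.io/name=ingress-nginx", 5*time.Minute)
	}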
	
	
	==> CRI-O <==
	Aug 05 22:57:13 addons-435364 crio[676]: time="2024-08-05 22:57:13.833333918Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722898633833298863,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589584,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=585ddcee-1597-438a-b8d7-8294bb285e06 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 22:57:13 addons-435364 crio[676]: time="2024-08-05 22:57:13.834033114Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=df6119f9-e303-4f09-8b9d-c4f583faf697 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 22:57:13 addons-435364 crio[676]: time="2024-08-05 22:57:13.834111591Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=df6119f9-e303-4f09-8b9d-c4f583faf697 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 22:57:13 addons-435364 crio[676]: time="2024-08-05 22:57:13.834466946Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9dce877f2c362a263adc22fa7c1dff8aa7deca2278b49b2cc88d482a8b6b4d04,PodSandboxId:8fd827d106ec2d0907c305fa69adb920d81b4078582840c0168b625b01ffa0a0,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722898627258550448,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-nbsh9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 18dc8ba2-00d5-49a3-891c-7e66fff40039,},Annotations:map[string]string{io.kubernetes.container.hash: 5f2dd573,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e682946ae982b910587c4dfd32ee4b18fb9be6ffc0c0ed2c73c3bcaccab5b3,PodSandboxId:191ee1227f0613fe15909c9265bcdf71f2df55c0514d55c7c442ab0cc2dd6591,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722898488055385763,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f96f1bbf-3982-41a3-94f0-5cab0827ddb3,},Annotations:map[string]string{io.kubernet
es.container.hash: cfbed574,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:279179149c33b3043b28d0af3a8612082ccf6cd6248319270f1dfdc7fc567211,PodSandboxId:3231f54f5336a24e2ef0cf19c8327249775ae9e3c236f930ed00e3ef1110ed36,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722898399559693191,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 973bc6a0-8c5b-48a5-a
795-cc389f59d219,},Annotations:map[string]string{io.kubernetes.container.hash: 2b7f493d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e99dd1415b76169a8c5445723cbdd7ed97fdcb7634e1df69cd4bfbe931586e5c,PodSandboxId:ed92c770dec5bd3889ce97e88ed8be9ff693fdb8eea77b512ea1db97a0a8dadf,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722898317737918391,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dsnb7,io.kubernetes.pod.namespace: ingress-nginx,io.kuber
netes.pod.uid: 56bfd78b-7481-4a9f-879d-0bcdbcf050cd,},Annotations:map[string]string{io.kubernetes.container.hash: 667cbb1b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85cf8d86410bef1c025ffe59434653e15360767265658f89ea62a0c43a9e5ca2,PodSandboxId:1f72d7f927f94469b7d55eb5cea7e2f8d6ef1452d24145b366e463af6ae59884,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722898317639582895,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-wp8kk,io.kubernetes
.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ae26cc1e-446c-4ab4-8cb1-7719e5cfb06f,},Annotations:map[string]string{io.kubernetes.container.hash: ce20c556,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:431e42b0b0b158402f10cb7b93827a107055987140e4dce351b570dc3f93facd,PodSandboxId:b020eb62a7b0f24faa5795c0c3d3869f774509a7eecdb34c96cfb5f299c3babf,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722898302555177776,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name:
metrics-server-c59844bb4-m9t52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f825462d-de15-4aa7-9436-76eda3bbd66f,},Annotations:map[string]string{io.kubernetes.container.hash: 62e4cceb,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b5c994323214402a42053a26dbdf6aaa73eeb251beee1a898876e1c323893d5,PodSandboxId:3129664cc0275e797e40cebd7629f9013b4ad7642ad0896ad1cef672c78146a5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722898250327121182,La
bels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfbc5ee9-491f-4c8d-aecc-72ba061092ec,},Annotations:map[string]string{io.kubernetes.container.hash: 1c0a402c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0abdbd3ed10f077f41965f1ab420f42938621e8dbc61df531790ac2ee7e9c40e,PodSandboxId:1055162b97d8516c24dc2a85ed57c9facac9405e2411fd71e3390b11f0b160b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722898246723247613,Labels:map[string]string{i
o.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ng8rk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2091f1e9-b1aa-45fd-8197-0f661fcf784e,},Annotations:map[string]string{io.kubernetes.container.hash: 89f0555d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffd14a580eef1dd67f8e26cf09eeb41251619feba45e4ab0d12f7f5b32879188,PodSandboxId:22bb8d21f29a210bd60addbae54caa6a518370f2cb4e18a6e41d2c21019b1d38,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722898243472744671,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lt8r2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1a7c99c-379f-4e2d-b241-4de97adffa76,},Annotations:map[string]string{io.kubernetes.container.hash: 3ab3dbc6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e58d0c10af25f73f245cd49ac44d141e0b4dc75e8e4ac8995698b79ed373af5e,PodSandboxId:961be72ba0eb869239d98780834acef5b053ceeed32f94be162b3be2cf91ec70,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,Run
timeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722898224416768929,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-435364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 321c366bd160eeee564705797a7fc2fc,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de461982723232193cc406adb03555f3314162eaba4b5e3472d116ab53272189,PodSandboxId:b4dfc4b300ffc8791cc6d909cd97644db5094407db26e8ee6de5b4357f14ce25,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:386
1cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722898224450036379,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-435364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d200a2d8f14313b20affd7e51da4716,},Annotations:map[string]string{io.kubernetes.container.hash: f267c287,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5b169a97f6f0fee85e8a3c58958ef344c63040a0d46d50b287ab5277d491e7d,PodSandboxId:f3b9318379ac35b248f9d1a079b0c94d03813100d40a7289914625df00dcf608,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218
f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722898224463679631,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-435364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18812a5d71e8307dfae178321f661472,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92eafd2fe5370e20300cf4b57a5758e16e3dee2bb64c465c25b601d07f7aa4c6,PodSandboxId:fef94c54938c430ddc6f396f0cac092b131d58fcf51ba251606475e1c80854d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a
964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722898224394938513,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-435364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3913d430a2d94646f23a316dc2057cb3,},Annotations:map[string]string{io.kubernetes.container.hash: 9ffc2af7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=df6119f9-e303-4f09-8b9d-c4f583faf697 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 22:57:13 addons-435364 crio[676]: time="2024-08-05 22:57:13.874479225Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f03b91fd-a7db-4e5c-b39d-bb57978ddf8f name=/runtime.v1.RuntimeService/Version
	Aug 05 22:57:13 addons-435364 crio[676]: time="2024-08-05 22:57:13.874646214Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f03b91fd-a7db-4e5c-b39d-bb57978ddf8f name=/runtime.v1.RuntimeService/Version
	Aug 05 22:57:13 addons-435364 crio[676]: time="2024-08-05 22:57:13.875799053Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e943c534-257c-42ce-b77a-d6aedef35097 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 22:57:13 addons-435364 crio[676]: time="2024-08-05 22:57:13.877242523Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722898633877217388,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589584,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e943c534-257c-42ce-b77a-d6aedef35097 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 22:57:13 addons-435364 crio[676]: time="2024-08-05 22:57:13.878218535Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b3265121-7f54-4e0c-a129-c0f2caac8df0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 22:57:13 addons-435364 crio[676]: time="2024-08-05 22:57:13.878289886Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b3265121-7f54-4e0c-a129-c0f2caac8df0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 22:57:13 addons-435364 crio[676]: time="2024-08-05 22:57:13.878703753Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9dce877f2c362a263adc22fa7c1dff8aa7deca2278b49b2cc88d482a8b6b4d04,PodSandboxId:8fd827d106ec2d0907c305fa69adb920d81b4078582840c0168b625b01ffa0a0,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722898627258550448,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-nbsh9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 18dc8ba2-00d5-49a3-891c-7e66fff40039,},Annotations:map[string]string{io.kubernetes.container.hash: 5f2dd573,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e682946ae982b910587c4dfd32ee4b18fb9be6ffc0c0ed2c73c3bcaccab5b3,PodSandboxId:191ee1227f0613fe15909c9265bcdf71f2df55c0514d55c7c442ab0cc2dd6591,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722898488055385763,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f96f1bbf-3982-41a3-94f0-5cab0827ddb3,},Annotations:map[string]string{io.kubernet
es.container.hash: cfbed574,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:279179149c33b3043b28d0af3a8612082ccf6cd6248319270f1dfdc7fc567211,PodSandboxId:3231f54f5336a24e2ef0cf19c8327249775ae9e3c236f930ed00e3ef1110ed36,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722898399559693191,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 973bc6a0-8c5b-48a5-a
795-cc389f59d219,},Annotations:map[string]string{io.kubernetes.container.hash: 2b7f493d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e99dd1415b76169a8c5445723cbdd7ed97fdcb7634e1df69cd4bfbe931586e5c,PodSandboxId:ed92c770dec5bd3889ce97e88ed8be9ff693fdb8eea77b512ea1db97a0a8dadf,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722898317737918391,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dsnb7,io.kubernetes.pod.namespace: ingress-nginx,io.kuber
netes.pod.uid: 56bfd78b-7481-4a9f-879d-0bcdbcf050cd,},Annotations:map[string]string{io.kubernetes.container.hash: 667cbb1b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85cf8d86410bef1c025ffe59434653e15360767265658f89ea62a0c43a9e5ca2,PodSandboxId:1f72d7f927f94469b7d55eb5cea7e2f8d6ef1452d24145b366e463af6ae59884,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722898317639582895,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-wp8kk,io.kubernetes
.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ae26cc1e-446c-4ab4-8cb1-7719e5cfb06f,},Annotations:map[string]string{io.kubernetes.container.hash: ce20c556,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:431e42b0b0b158402f10cb7b93827a107055987140e4dce351b570dc3f93facd,PodSandboxId:b020eb62a7b0f24faa5795c0c3d3869f774509a7eecdb34c96cfb5f299c3babf,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722898302555177776,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name:
metrics-server-c59844bb4-m9t52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f825462d-de15-4aa7-9436-76eda3bbd66f,},Annotations:map[string]string{io.kubernetes.container.hash: 62e4cceb,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b5c994323214402a42053a26dbdf6aaa73eeb251beee1a898876e1c323893d5,PodSandboxId:3129664cc0275e797e40cebd7629f9013b4ad7642ad0896ad1cef672c78146a5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722898250327121182,La
bels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfbc5ee9-491f-4c8d-aecc-72ba061092ec,},Annotations:map[string]string{io.kubernetes.container.hash: 1c0a402c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0abdbd3ed10f077f41965f1ab420f42938621e8dbc61df531790ac2ee7e9c40e,PodSandboxId:1055162b97d8516c24dc2a85ed57c9facac9405e2411fd71e3390b11f0b160b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722898246723247613,Labels:map[string]string{i
o.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ng8rk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2091f1e9-b1aa-45fd-8197-0f661fcf784e,},Annotations:map[string]string{io.kubernetes.container.hash: 89f0555d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffd14a580eef1dd67f8e26cf09eeb41251619feba45e4ab0d12f7f5b32879188,PodSandboxId:22bb8d21f29a210bd60addbae54caa6a518370f2cb4e18a6e41d2c21019b1d38,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722898243472744671,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lt8r2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1a7c99c-379f-4e2d-b241-4de97adffa76,},Annotations:map[string]string{io.kubernetes.container.hash: 3ab3dbc6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e58d0c10af25f73f245cd49ac44d141e0b4dc75e8e4ac8995698b79ed373af5e,PodSandboxId:961be72ba0eb869239d98780834acef5b053ceeed32f94be162b3be2cf91ec70,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,Run
timeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722898224416768929,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-435364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 321c366bd160eeee564705797a7fc2fc,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de461982723232193cc406adb03555f3314162eaba4b5e3472d116ab53272189,PodSandboxId:b4dfc4b300ffc8791cc6d909cd97644db5094407db26e8ee6de5b4357f14ce25,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:386
1cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722898224450036379,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-435364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d200a2d8f14313b20affd7e51da4716,},Annotations:map[string]string{io.kubernetes.container.hash: f267c287,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5b169a97f6f0fee85e8a3c58958ef344c63040a0d46d50b287ab5277d491e7d,PodSandboxId:f3b9318379ac35b248f9d1a079b0c94d03813100d40a7289914625df00dcf608,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218
f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722898224463679631,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-435364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18812a5d71e8307dfae178321f661472,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92eafd2fe5370e20300cf4b57a5758e16e3dee2bb64c465c25b601d07f7aa4c6,PodSandboxId:fef94c54938c430ddc6f396f0cac092b131d58fcf51ba251606475e1c80854d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a
964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722898224394938513,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-435364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3913d430a2d94646f23a316dc2057cb3,},Annotations:map[string]string{io.kubernetes.container.hash: 9ffc2af7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b3265121-7f54-4e0c-a129-c0f2caac8df0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 22:57:13 addons-435364 crio[676]: time="2024-08-05 22:57:13.915542695Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=738ad75a-613c-44f1-944d-5434d668d5f5 name=/runtime.v1.RuntimeService/Version
	Aug 05 22:57:13 addons-435364 crio[676]: time="2024-08-05 22:57:13.915683320Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=738ad75a-613c-44f1-944d-5434d668d5f5 name=/runtime.v1.RuntimeService/Version
	Aug 05 22:57:13 addons-435364 crio[676]: time="2024-08-05 22:57:13.918214513Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ad0ba9c1-9e90-474e-a8c7-2d24140f3740 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 22:57:13 addons-435364 crio[676]: time="2024-08-05 22:57:13.919925896Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722898633919892915,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589584,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ad0ba9c1-9e90-474e-a8c7-2d24140f3740 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 22:57:13 addons-435364 crio[676]: time="2024-08-05 22:57:13.921028653Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b9d1f471-0781-4113-bd81-e63088ba9808 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 22:57:13 addons-435364 crio[676]: time="2024-08-05 22:57:13.921105631Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b9d1f471-0781-4113-bd81-e63088ba9808 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 22:57:13 addons-435364 crio[676]: time="2024-08-05 22:57:13.921459448Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9dce877f2c362a263adc22fa7c1dff8aa7deca2278b49b2cc88d482a8b6b4d04,PodSandboxId:8fd827d106ec2d0907c305fa69adb920d81b4078582840c0168b625b01ffa0a0,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722898627258550448,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-nbsh9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 18dc8ba2-00d5-49a3-891c-7e66fff40039,},Annotations:map[string]string{io.kubernetes.container.hash: 5f2dd573,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e682946ae982b910587c4dfd32ee4b18fb9be6ffc0c0ed2c73c3bcaccab5b3,PodSandboxId:191ee1227f0613fe15909c9265bcdf71f2df55c0514d55c7c442ab0cc2dd6591,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722898488055385763,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f96f1bbf-3982-41a3-94f0-5cab0827ddb3,},Annotations:map[string]string{io.kubernet
es.container.hash: cfbed574,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:279179149c33b3043b28d0af3a8612082ccf6cd6248319270f1dfdc7fc567211,PodSandboxId:3231f54f5336a24e2ef0cf19c8327249775ae9e3c236f930ed00e3ef1110ed36,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722898399559693191,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 973bc6a0-8c5b-48a5-a
795-cc389f59d219,},Annotations:map[string]string{io.kubernetes.container.hash: 2b7f493d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e99dd1415b76169a8c5445723cbdd7ed97fdcb7634e1df69cd4bfbe931586e5c,PodSandboxId:ed92c770dec5bd3889ce97e88ed8be9ff693fdb8eea77b512ea1db97a0a8dadf,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722898317737918391,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dsnb7,io.kubernetes.pod.namespace: ingress-nginx,io.kuber
netes.pod.uid: 56bfd78b-7481-4a9f-879d-0bcdbcf050cd,},Annotations:map[string]string{io.kubernetes.container.hash: 667cbb1b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85cf8d86410bef1c025ffe59434653e15360767265658f89ea62a0c43a9e5ca2,PodSandboxId:1f72d7f927f94469b7d55eb5cea7e2f8d6ef1452d24145b366e463af6ae59884,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722898317639582895,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-wp8kk,io.kubernetes
.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ae26cc1e-446c-4ab4-8cb1-7719e5cfb06f,},Annotations:map[string]string{io.kubernetes.container.hash: ce20c556,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:431e42b0b0b158402f10cb7b93827a107055987140e4dce351b570dc3f93facd,PodSandboxId:b020eb62a7b0f24faa5795c0c3d3869f774509a7eecdb34c96cfb5f299c3babf,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722898302555177776,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name:
metrics-server-c59844bb4-m9t52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f825462d-de15-4aa7-9436-76eda3bbd66f,},Annotations:map[string]string{io.kubernetes.container.hash: 62e4cceb,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b5c994323214402a42053a26dbdf6aaa73eeb251beee1a898876e1c323893d5,PodSandboxId:3129664cc0275e797e40cebd7629f9013b4ad7642ad0896ad1cef672c78146a5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722898250327121182,La
bels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfbc5ee9-491f-4c8d-aecc-72ba061092ec,},Annotations:map[string]string{io.kubernetes.container.hash: 1c0a402c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0abdbd3ed10f077f41965f1ab420f42938621e8dbc61df531790ac2ee7e9c40e,PodSandboxId:1055162b97d8516c24dc2a85ed57c9facac9405e2411fd71e3390b11f0b160b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722898246723247613,Labels:map[string]string{i
o.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ng8rk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2091f1e9-b1aa-45fd-8197-0f661fcf784e,},Annotations:map[string]string{io.kubernetes.container.hash: 89f0555d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffd14a580eef1dd67f8e26cf09eeb41251619feba45e4ab0d12f7f5b32879188,PodSandboxId:22bb8d21f29a210bd60addbae54caa6a518370f2cb4e18a6e41d2c21019b1d38,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722898243472744671,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lt8r2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1a7c99c-379f-4e2d-b241-4de97adffa76,},Annotations:map[string]string{io.kubernetes.container.hash: 3ab3dbc6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e58d0c10af25f73f245cd49ac44d141e0b4dc75e8e4ac8995698b79ed373af5e,PodSandboxId:961be72ba0eb869239d98780834acef5b053ceeed32f94be162b3be2cf91ec70,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,Run
timeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722898224416768929,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-435364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 321c366bd160eeee564705797a7fc2fc,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de461982723232193cc406adb03555f3314162eaba4b5e3472d116ab53272189,PodSandboxId:b4dfc4b300ffc8791cc6d909cd97644db5094407db26e8ee6de5b4357f14ce25,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:386
1cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722898224450036379,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-435364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d200a2d8f14313b20affd7e51da4716,},Annotations:map[string]string{io.kubernetes.container.hash: f267c287,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5b169a97f6f0fee85e8a3c58958ef344c63040a0d46d50b287ab5277d491e7d,PodSandboxId:f3b9318379ac35b248f9d1a079b0c94d03813100d40a7289914625df00dcf608,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218
f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722898224463679631,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-435364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18812a5d71e8307dfae178321f661472,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92eafd2fe5370e20300cf4b57a5758e16e3dee2bb64c465c25b601d07f7aa4c6,PodSandboxId:fef94c54938c430ddc6f396f0cac092b131d58fcf51ba251606475e1c80854d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a
964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722898224394938513,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-435364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3913d430a2d94646f23a316dc2057cb3,},Annotations:map[string]string{io.kubernetes.container.hash: 9ffc2af7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b9d1f471-0781-4113-bd81-e63088ba9808 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 22:57:13 addons-435364 crio[676]: time="2024-08-05 22:57:13.958046791Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cb730240-e11b-4234-bfe0-60492a27dfff name=/runtime.v1.RuntimeService/Version
	Aug 05 22:57:13 addons-435364 crio[676]: time="2024-08-05 22:57:13.958124900Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cb730240-e11b-4234-bfe0-60492a27dfff name=/runtime.v1.RuntimeService/Version
	Aug 05 22:57:13 addons-435364 crio[676]: time="2024-08-05 22:57:13.959344282Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5d9067ae-fee4-4543-a1a6-af9694c043e0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 22:57:13 addons-435364 crio[676]: time="2024-08-05 22:57:13.961281382Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722898633961252165,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589584,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5d9067ae-fee4-4543-a1a6-af9694c043e0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 22:57:13 addons-435364 crio[676]: time="2024-08-05 22:57:13.961795521Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=47df4e6a-237b-49fb-aac4-b5a8027ab1a7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 22:57:13 addons-435364 crio[676]: time="2024-08-05 22:57:13.961855491Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=47df4e6a-237b-49fb-aac4-b5a8027ab1a7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 22:57:13 addons-435364 crio[676]: time="2024-08-05 22:57:13.962226432Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9dce877f2c362a263adc22fa7c1dff8aa7deca2278b49b2cc88d482a8b6b4d04,PodSandboxId:8fd827d106ec2d0907c305fa69adb920d81b4078582840c0168b625b01ffa0a0,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722898627258550448,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-nbsh9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 18dc8ba2-00d5-49a3-891c-7e66fff40039,},Annotations:map[string]string{io.kubernetes.container.hash: 5f2dd573,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e682946ae982b910587c4dfd32ee4b18fb9be6ffc0c0ed2c73c3bcaccab5b3,PodSandboxId:191ee1227f0613fe15909c9265bcdf71f2df55c0514d55c7c442ab0cc2dd6591,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722898488055385763,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f96f1bbf-3982-41a3-94f0-5cab0827ddb3,},Annotations:map[string]string{io.kubernet
es.container.hash: cfbed574,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:279179149c33b3043b28d0af3a8612082ccf6cd6248319270f1dfdc7fc567211,PodSandboxId:3231f54f5336a24e2ef0cf19c8327249775ae9e3c236f930ed00e3ef1110ed36,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722898399559693191,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 973bc6a0-8c5b-48a5-a
795-cc389f59d219,},Annotations:map[string]string{io.kubernetes.container.hash: 2b7f493d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e99dd1415b76169a8c5445723cbdd7ed97fdcb7634e1df69cd4bfbe931586e5c,PodSandboxId:ed92c770dec5bd3889ce97e88ed8be9ff693fdb8eea77b512ea1db97a0a8dadf,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722898317737918391,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-dsnb7,io.kubernetes.pod.namespace: ingress-nginx,io.kuber
netes.pod.uid: 56bfd78b-7481-4a9f-879d-0bcdbcf050cd,},Annotations:map[string]string{io.kubernetes.container.hash: 667cbb1b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85cf8d86410bef1c025ffe59434653e15360767265658f89ea62a0c43a9e5ca2,PodSandboxId:1f72d7f927f94469b7d55eb5cea7e2f8d6ef1452d24145b366e463af6ae59884,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1722898317639582895,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-wp8kk,io.kubernetes
.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: ae26cc1e-446c-4ab4-8cb1-7719e5cfb06f,},Annotations:map[string]string{io.kubernetes.container.hash: ce20c556,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:431e42b0b0b158402f10cb7b93827a107055987140e4dce351b570dc3f93facd,PodSandboxId:b020eb62a7b0f24faa5795c0c3d3869f774509a7eecdb34c96cfb5f299c3babf,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722898302555177776,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name:
metrics-server-c59844bb4-m9t52,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f825462d-de15-4aa7-9436-76eda3bbd66f,},Annotations:map[string]string{io.kubernetes.container.hash: 62e4cceb,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b5c994323214402a42053a26dbdf6aaa73eeb251beee1a898876e1c323893d5,PodSandboxId:3129664cc0275e797e40cebd7629f9013b4ad7642ad0896ad1cef672c78146a5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722898250327121182,La
bels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfbc5ee9-491f-4c8d-aecc-72ba061092ec,},Annotations:map[string]string{io.kubernetes.container.hash: 1c0a402c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0abdbd3ed10f077f41965f1ab420f42938621e8dbc61df531790ac2ee7e9c40e,PodSandboxId:1055162b97d8516c24dc2a85ed57c9facac9405e2411fd71e3390b11f0b160b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722898246723247613,Labels:map[string]string{i
o.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-ng8rk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2091f1e9-b1aa-45fd-8197-0f661fcf784e,},Annotations:map[string]string{io.kubernetes.container.hash: 89f0555d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffd14a580eef1dd67f8e26cf09eeb41251619feba45e4ab0d12f7f5b32879188,PodSandboxId:22bb8d21f29a210bd60addbae54caa6a518370f2cb4e18a6e41d2c21019b1d38,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722898243472744671,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lt8r2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1a7c99c-379f-4e2d-b241-4de97adffa76,},Annotations:map[string]string{io.kubernetes.container.hash: 3ab3dbc6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e58d0c10af25f73f245cd49ac44d141e0b4dc75e8e4ac8995698b79ed373af5e,PodSandboxId:961be72ba0eb869239d98780834acef5b053ceeed32f94be162b3be2cf91ec70,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,Run
timeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722898224416768929,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-435364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 321c366bd160eeee564705797a7fc2fc,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de461982723232193cc406adb03555f3314162eaba4b5e3472d116ab53272189,PodSandboxId:b4dfc4b300ffc8791cc6d909cd97644db5094407db26e8ee6de5b4357f14ce25,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:386
1cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722898224450036379,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-435364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d200a2d8f14313b20affd7e51da4716,},Annotations:map[string]string{io.kubernetes.container.hash: f267c287,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5b169a97f6f0fee85e8a3c58958ef344c63040a0d46d50b287ab5277d491e7d,PodSandboxId:f3b9318379ac35b248f9d1a079b0c94d03813100d40a7289914625df00dcf608,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218
f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722898224463679631,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-435364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18812a5d71e8307dfae178321f661472,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92eafd2fe5370e20300cf4b57a5758e16e3dee2bb64c465c25b601d07f7aa4c6,PodSandboxId:fef94c54938c430ddc6f396f0cac092b131d58fcf51ba251606475e1c80854d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a
964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722898224394938513,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-435364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3913d430a2d94646f23a316dc2057cb3,},Annotations:map[string]string{io.kubernetes.container.hash: 9ffc2af7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=47df4e6a-237b-49fb-aac4-b5a8027ab1a7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9dce877f2c362       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        6 seconds ago       Running             hello-world-app           0                   8fd827d106ec2       hello-world-app-6778b5fc9f-nbsh9
	30e682946ae98       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                              2 minutes ago       Running             nginx                     0                   191ee1227f061       nginx
	279179149c33b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   3231f54f5336a       busybox
	e99dd1415b761       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   5 minutes ago       Exited              patch                     0                   ed92c770dec5b       ingress-nginx-admission-patch-dsnb7
	85cf8d86410be       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   5 minutes ago       Exited              create                    0                   1f72d7f927f94       ingress-nginx-admission-create-wp8kk
	431e42b0b0b15       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        5 minutes ago       Running             metrics-server            0                   b020eb62a7b0f       metrics-server-c59844bb4-m9t52
	7b5c994323214       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             6 minutes ago       Running             storage-provisioner       0                   3129664cc0275       storage-provisioner
	0abdbd3ed10f0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             6 minutes ago       Running             coredns                   0                   1055162b97d85       coredns-7db6d8ff4d-ng8rk
	ffd14a580eef1       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                             6 minutes ago       Running             kube-proxy                0                   22bb8d21f29a2       kube-proxy-lt8r2
	b5b169a97f6f0       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                             6 minutes ago       Running             kube-controller-manager   0                   f3b9318379ac3       kube-controller-manager-addons-435364
	de46198272323       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                             6 minutes ago       Running             etcd                      0                   b4dfc4b300ffc       etcd-addons-435364
	e58d0c10af25f       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                             6 minutes ago       Running             kube-scheduler            0                   961be72ba0eb8       kube-scheduler-addons-435364
	92eafd2fe5370       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                             6 minutes ago       Running             kube-apiserver            0                   fef94c54938c4       kube-apiserver-addons-435364
	
	
	==> coredns [0abdbd3ed10f077f41965f1ab420f42938621e8dbc61df531790ac2ee7e9c40e] <==
	[INFO] 10.244.0.7:59460 - 7528 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000197257s
	[INFO] 10.244.0.7:36528 - 7268 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000131316s
	[INFO] 10.244.0.7:36528 - 50016 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000114669s
	[INFO] 10.244.0.7:50164 - 20174 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000094933s
	[INFO] 10.244.0.7:50164 - 31949 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000147131s
	[INFO] 10.244.0.7:39320 - 31278 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00010435s
	[INFO] 10.244.0.7:39320 - 58668 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000075439s
	[INFO] 10.244.0.7:53046 - 48151 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000088933s
	[INFO] 10.244.0.7:53046 - 45824 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000070828s
	[INFO] 10.244.0.7:34899 - 32792 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000073984s
	[INFO] 10.244.0.7:34899 - 64541 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000031781s
	[INFO] 10.244.0.7:44128 - 58280 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000070705s
	[INFO] 10.244.0.7:44128 - 48046 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000053088s
	[INFO] 10.244.0.7:39146 - 12662 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000049025s
	[INFO] 10.244.0.7:39146 - 50551 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000037963s
	[INFO] 10.244.0.21:55956 - 61218 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000362377s
	[INFO] 10.244.0.21:43818 - 41906 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000070895s
	[INFO] 10.244.0.21:60942 - 59673 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000117368s
	[INFO] 10.244.0.21:36743 - 47392 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000062908s
	[INFO] 10.244.0.21:54746 - 35272 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000172605s
	[INFO] 10.244.0.21:32900 - 59100 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000108178s
	[INFO] 10.244.0.21:36957 - 16274 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000499387s
	[INFO] 10.244.0.21:42027 - 9559 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000601238s
	[INFO] 10.244.0.26:43054 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000443228s
	[INFO] 10.244.0.26:49964 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000095956s
	
	
	==> describe nodes <==
	Name:               addons-435364
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-435364
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=addons-435364
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_05T22_50_30_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-435364
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 22:50:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-435364
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 22:57:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 22:55:06 +0000   Mon, 05 Aug 2024 22:50:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 22:55:06 +0000   Mon, 05 Aug 2024 22:50:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 22:55:06 +0000   Mon, 05 Aug 2024 22:50:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 22:55:06 +0000   Mon, 05 Aug 2024 22:50:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.129
	  Hostname:    addons-435364
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 242967d5bc594151bd5fc013cd6dfd9d
	  System UUID:                242967d5-bc59-4151-bd5f-c013cd6dfd9d
	  Boot ID:                    bba553dc-ef04-4531-98f6-7a74d426d8f2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         3m58s
	  default                     hello-world-app-6778b5fc9f-nbsh9         0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         10s
	  default                     nginx                                    0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         2m31s
	  kube-system                 coredns-7db6d8ff4d-ng8rk                 100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (1%!)(MISSING)        170Mi (4%!)(MISSING)     6m31s
	  kube-system                 etcd-addons-435364                       100m (5%!)(MISSING)     0 (0%!)(MISSING)      100Mi (2%!)(MISSING)       0 (0%!)(MISSING)         6m46s
	  kube-system                 kube-apiserver-addons-435364             250m (12%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         6m45s
	  kube-system                 kube-controller-manager-addons-435364    200m (10%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         6m45s
	  kube-system                 kube-proxy-lt8r2                         0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         6m31s
	  kube-system                 kube-scheduler-addons-435364             100m (5%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         6m46s
	  kube-system                 metrics-server-c59844bb4-m9t52           100m (5%!)(MISSING)     0 (0%!)(MISSING)      200Mi (5%!)(MISSING)       0 (0%!)(MISSING)         6m25s
	  kube-system                 storage-provisioner                      0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         6m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%!)(MISSING)  0 (0%!)(MISSING)
	  memory             370Mi (9%!)(MISSING)  170Mi (4%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m30s  kube-proxy       
	  Normal  Starting                 6m45s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m45s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m45s  kubelet          Node addons-435364 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m45s  kubelet          Node addons-435364 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m45s  kubelet          Node addons-435364 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m44s  kubelet          Node addons-435364 status is now: NodeReady
	  Normal  RegisteredNode           6m32s  node-controller  Node addons-435364 event: Registered Node addons-435364 in Controller
	
	
	==> dmesg <==
	[Aug 5 22:51] kauditd_printk_skb: 100 callbacks suppressed
	[ +28.171030] kauditd_printk_skb: 4 callbacks suppressed
	[ +10.816151] kauditd_printk_skb: 27 callbacks suppressed
	[  +8.115843] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.116573] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.116063] kauditd_printk_skb: 20 callbacks suppressed
	[Aug 5 22:52] kauditd_printk_skb: 88 callbacks suppressed
	[  +8.807448] kauditd_printk_skb: 12 callbacks suppressed
	[ +22.023864] kauditd_printk_skb: 24 callbacks suppressed
	[ +14.122958] kauditd_printk_skb: 24 callbacks suppressed
	[Aug 5 22:53] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.359583] kauditd_printk_skb: 16 callbacks suppressed
	[  +7.631434] kauditd_printk_skb: 24 callbacks suppressed
	[  +8.957808] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.897545] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.002009] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.087794] kauditd_printk_skb: 34 callbacks suppressed
	[  +6.316599] kauditd_printk_skb: 31 callbacks suppressed
	[Aug 5 22:54] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.475253] kauditd_printk_skb: 15 callbacks suppressed
	[  +7.942293] kauditd_printk_skb: 6 callbacks suppressed
	[  +9.168342] kauditd_printk_skb: 10 callbacks suppressed
	[  +8.130342] kauditd_printk_skb: 33 callbacks suppressed
	[  +6.823058] kauditd_printk_skb: 40 callbacks suppressed
	[Aug 5 22:57] kauditd_printk_skb: 41 callbacks suppressed
	
	
	==> etcd [de461982723232193cc406adb03555f3314162eaba4b5e3472d116ab53272189] <==
	{"level":"info","ts":"2024-08-05T22:51:48.073317Z","caller":"traceutil/trace.go:171","msg":"trace[45443577] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:976; }","duration":"187.000952ms","start":"2024-08-05T22:51:47.886308Z","end":"2024-08-05T22:51:48.073309Z","steps":["trace[45443577] 'agreement among raft nodes before linearized reading'  (duration: 186.862347ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-05T22:51:48.073518Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"199.500646ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85510"}
	{"level":"info","ts":"2024-08-05T22:51:48.073648Z","caller":"traceutil/trace.go:171","msg":"trace[635675946] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:976; }","duration":"199.653443ms","start":"2024-08-05T22:51:47.873984Z","end":"2024-08-05T22:51:48.073638Z","steps":["trace[635675946] 'agreement among raft nodes before linearized reading'  (duration: 199.305766ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T22:51:51.775295Z","caller":"traceutil/trace.go:171","msg":"trace[1982578713] transaction","detail":"{read_only:false; response_revision:1002; number_of_response:1; }","duration":"304.708861ms","start":"2024-08-05T22:51:51.470572Z","end":"2024-08-05T22:51:51.77528Z","steps":["trace[1982578713] 'process raft request'  (duration: 304.611449ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-05T22:51:51.775421Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-05T22:51:51.470555Z","time spent":"304.780957ms","remote":"127.0.0.1:43460","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-435364\" mod_revision:961 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-435364\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-435364\" > >"}
	{"level":"info","ts":"2024-08-05T22:52:08.222785Z","caller":"traceutil/trace.go:171","msg":"trace[354439898] transaction","detail":"{read_only:false; response_revision:1133; number_of_response:1; }","duration":"334.360711ms","start":"2024-08-05T22:52:07.887639Z","end":"2024-08-05T22:52:08.222Z","steps":["trace[354439898] 'process raft request'  (duration: 333.7897ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-05T22:52:08.222902Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-05T22:52:07.887584Z","time spent":"335.265726ms","remote":"127.0.0.1:43460","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":539,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1108 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:452 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"warn","ts":"2024-08-05T22:53:11.816333Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"434.009449ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-08-05T22:53:11.817453Z","caller":"traceutil/trace.go:171","msg":"trace[228844107] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1261; }","duration":"435.168121ms","start":"2024-08-05T22:53:11.382259Z","end":"2024-08-05T22:53:11.817427Z","steps":["trace[228844107] 'range keys from in-memory index tree'  (duration: 433.88495ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-05T22:53:11.817712Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-05T22:53:11.382242Z","time spent":"435.44447ms","remote":"127.0.0.1:43364","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":3,"response size":14386,"request content":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" "}
	{"level":"info","ts":"2024-08-05T22:53:14.531Z","caller":"traceutil/trace.go:171","msg":"trace[113767598] linearizableReadLoop","detail":"{readStateIndex:1317; appliedIndex:1316; }","duration":"308.064941ms","start":"2024-08-05T22:53:14.222859Z","end":"2024-08-05T22:53:14.530924Z","steps":["trace[113767598] 'read index received'  (duration: 302.130955ms)","trace[113767598] 'applied index is now lower than readState.Index'  (duration: 5.933063ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-05T22:53:14.531324Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"308.382815ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-05T22:53:14.531425Z","caller":"traceutil/trace.go:171","msg":"trace[1706853534] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1264; }","duration":"308.580107ms","start":"2024-08-05T22:53:14.222834Z","end":"2024-08-05T22:53:14.531414Z","steps":["trace[1706853534] 'agreement among raft nodes before linearized reading'  (duration: 308.379139ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-05T22:53:14.531525Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-05T22:53:14.22281Z","time spent":"308.708886ms","remote":"127.0.0.1:53582","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-08-05T22:53:14.531656Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.153515ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumes/\" range_end:\"/registry/persistentvolumes0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-05T22:53:14.53174Z","caller":"traceutil/trace.go:171","msg":"trace[161278946] range","detail":"{range_begin:/registry/persistentvolumes/; range_end:/registry/persistentvolumes0; response_count:0; response_revision:1264; }","duration":"154.323724ms","start":"2024-08-05T22:53:14.377404Z","end":"2024-08-05T22:53:14.531728Z","steps":["trace[161278946] 'agreement among raft nodes before linearized reading'  (duration: 154.028639ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-05T22:53:14.53203Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.103294ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-08-05T22:53:14.532759Z","caller":"traceutil/trace.go:171","msg":"trace[2037048085] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1264; }","duration":"149.913604ms","start":"2024-08-05T22:53:14.382832Z","end":"2024-08-05T22:53:14.532745Z","steps":["trace[2037048085] 'agreement among raft nodes before linearized reading'  (duration: 149.113811ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T22:54:02.633083Z","caller":"traceutil/trace.go:171","msg":"trace[279443207] linearizableReadLoop","detail":"{readStateIndex:1622; appliedIndex:1621; }","duration":"270.210508ms","start":"2024-08-05T22:54:02.362847Z","end":"2024-08-05T22:54:02.633057Z","steps":["trace[279443207] 'read index received'  (duration: 270.089567ms)","trace[279443207] 'applied index is now lower than readState.Index'  (duration: 120.37µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-05T22:54:02.633779Z","caller":"traceutil/trace.go:171","msg":"trace[950448534] transaction","detail":"{read_only:false; response_revision:1553; number_of_response:1; }","duration":"383.315909ms","start":"2024-08-05T22:54:02.250447Z","end":"2024-08-05T22:54:02.633763Z","steps":["trace[950448534] 'process raft request'  (duration: 382.527998ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-05T22:54:02.634777Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-05T22:54:02.250429Z","time spent":"384.234427ms","remote":"127.0.0.1:43460","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":677,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-ikapivztbtbzzxquhxsg22mb5m\" mod_revision:1480 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-ikapivztbtbzzxquhxsg22mb5m\" value_size:604 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-ikapivztbtbzzxquhxsg22mb5m\" > >"}
	{"level":"warn","ts":"2024-08-05T22:54:02.63548Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"270.437051ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:3 size:8910"}
	{"level":"info","ts":"2024-08-05T22:54:02.636263Z","caller":"traceutil/trace.go:171","msg":"trace[1506143419] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:3; response_revision:1553; }","duration":"273.447196ms","start":"2024-08-05T22:54:02.362802Z","end":"2024-08-05T22:54:02.636249Z","steps":["trace[1506143419] 'agreement among raft nodes before linearized reading'  (duration: 270.364005ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T22:54:36.225748Z","caller":"traceutil/trace.go:171","msg":"trace[2002722119] transaction","detail":"{read_only:false; response_revision:1756; number_of_response:1; }","duration":"268.925174ms","start":"2024-08-05T22:54:35.956797Z","end":"2024-08-05T22:54:36.225722Z","steps":["trace[2002722119] 'process raft request'  (duration: 268.594751ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T22:55:13.496992Z","caller":"traceutil/trace.go:171","msg":"trace[520917928] transaction","detail":"{read_only:false; response_revision:1998; number_of_response:1; }","duration":"190.518517ms","start":"2024-08-05T22:55:13.306449Z","end":"2024-08-05T22:55:13.496968Z","steps":["trace[520917928] 'process raft request'  (duration: 190.350616ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:57:14 up 7 min,  0 users,  load average: 0.27, 0.76, 0.47
	Linux addons-435364 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [92eafd2fe5370e20300cf4b57a5758e16e3dee2bb64c465c25b601d07f7aa4c6] <==
	E0805 22:52:50.378858       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.162.58:443/apis/metrics.k8s.io/v1beta1: Get "https://10.106.162.58:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.106.162.58:443: connect: connection refused
	E0805 22:52:50.380301       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.162.58:443/apis/metrics.k8s.io/v1beta1: Get "https://10.106.162.58:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.106.162.58:443: connect: connection refused
	E0805 22:52:50.385748       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.162.58:443/apis/metrics.k8s.io/v1beta1: Get "https://10.106.162.58:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.106.162.58:443: connect: connection refused
	I0805 22:52:50.459080       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0805 22:53:27.476311       1 conn.go:339] Error on socket receive: read tcp 192.168.39.129:8443->192.168.39.1:53210: use of closed network connection
	E0805 22:53:27.697934       1 conn.go:339] Error on socket receive: read tcp 192.168.39.129:8443->192.168.39.1:53248: use of closed network connection
	E0805 22:54:04.812241       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0805 22:54:10.424991       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0805 22:54:32.490718       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.9.27"}
	I0805 22:54:38.096670       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0805 22:54:39.159214       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0805 22:54:43.600695       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0805 22:54:43.767931       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.101.124"}
	I0805 22:54:45.477321       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0805 22:54:45.477522       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0805 22:54:45.511195       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0805 22:54:45.511911       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0805 22:54:45.529030       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0805 22:54:45.529150       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0805 22:54:45.549126       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0805 22:54:45.549177       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0805 22:54:46.512892       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0805 22:54:46.550102       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0805 22:54:46.585572       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0805 22:57:04.355195       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.16.70"}
	
	
	==> kube-controller-manager [b5b169a97f6f0fee85e8a3c58958ef344c63040a0d46d50b287ab5277d491e7d] <==
	W0805 22:55:55.430494       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 22:55:55.430651       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 22:56:14.331990       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 22:56:14.332058       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 22:56:28.211161       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 22:56:28.211352       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 22:56:32.578581       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 22:56:32.578664       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 22:56:50.276490       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 22:56:50.276725       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0805 22:57:04.175795       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="37.969043ms"
	I0805 22:57:04.196042       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="20.040449ms"
	I0805 22:57:04.236126       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="39.946568ms"
	I0805 22:57:04.236730       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="442.642µs"
	I0805 22:57:06.018772       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create"
	I0805 22:57:06.024454       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6d9bd977d4" duration="9.556µs"
	I0805 22:57:06.024996       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0805 22:57:07.387755       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="9.913507ms"
	I0805 22:57:07.388132       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-6778b5fc9f" duration="36.057µs"
	W0805 22:57:08.416414       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 22:57:08.416467       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 22:57:12.883980       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 22:57:12.884135       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 22:57:14.078371       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 22:57:14.078413       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [ffd14a580eef1dd67f8e26cf09eeb41251619feba45e4ab0d12f7f5b32879188] <==
	I0805 22:50:43.764864       1 server_linux.go:69] "Using iptables proxy"
	I0805 22:50:43.852565       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.129"]
	I0805 22:50:44.051571       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0805 22:50:44.051657       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 22:50:44.051673       1 server_linux.go:165] "Using iptables Proxier"
	I0805 22:50:44.059768       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 22:50:44.059988       1 server.go:872] "Version info" version="v1.30.3"
	I0805 22:50:44.060024       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 22:50:44.061190       1 config.go:192] "Starting service config controller"
	I0805 22:50:44.061203       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 22:50:44.061227       1 config.go:101] "Starting endpoint slice config controller"
	I0805 22:50:44.061230       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 22:50:44.062014       1 config.go:319] "Starting node config controller"
	I0805 22:50:44.062025       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 22:50:44.161389       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0805 22:50:44.161430       1 shared_informer.go:320] Caches are synced for service config
	I0805 22:50:44.162724       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e58d0c10af25f73f245cd49ac44d141e0b4dc75e8e4ac8995698b79ed373af5e] <==
	W0805 22:50:27.801729       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0805 22:50:27.801776       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0805 22:50:27.847448       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0805 22:50:27.847766       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0805 22:50:27.847709       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0805 22:50:27.847950       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0805 22:50:27.937590       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0805 22:50:27.938006       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0805 22:50:28.046502       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0805 22:50:28.046758       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0805 22:50:28.079240       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0805 22:50:28.079360       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0805 22:50:28.083787       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0805 22:50:28.083905       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0805 22:50:28.139963       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0805 22:50:28.140065       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0805 22:50:28.167314       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0805 22:50:28.167440       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0805 22:50:28.180871       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0805 22:50:28.181015       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0805 22:50:28.191763       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0805 22:50:28.192243       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0805 22:50:28.201723       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0805 22:50:28.201840       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0805 22:50:30.526104       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 05 22:57:04 addons-435364 kubelet[1261]: I0805 22:57:04.175506    1261 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d353723-5ceb-46ce-8829-c15fde070d4d" containerName="headlamp"
	Aug 05 22:57:04 addons-435364 kubelet[1261]: I0805 22:57:04.175539    1261 memory_manager.go:354] "RemoveStaleState removing state" podUID="534510f3-5541-4591-8759-4758ac8b340d" containerName="gadget"
	Aug 05 22:57:04 addons-435364 kubelet[1261]: I0805 22:57:04.175667    1261 memory_manager.go:354] "RemoveStaleState removing state" podUID="19b31468-b55d-4eb4-a008-7b9b9af0e582" containerName="volume-snapshot-controller"
	Aug 05 22:57:04 addons-435364 kubelet[1261]: I0805 22:57:04.207249    1261 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfrk8\" (UniqueName: \"kubernetes.io/projected/18dc8ba2-00d5-49a3-891c-7e66fff40039-kube-api-access-wfrk8\") pod \"hello-world-app-6778b5fc9f-nbsh9\" (UID: \"18dc8ba2-00d5-49a3-891c-7e66fff40039\") " pod="default/hello-world-app-6778b5fc9f-nbsh9"
	Aug 05 22:57:05 addons-435364 kubelet[1261]: I0805 22:57:05.316270    1261 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnfbx\" (UniqueName: \"kubernetes.io/projected/a3229854-d9da-4ed8-ad6f-5a4b35dd430f-kube-api-access-vnfbx\") pod \"a3229854-d9da-4ed8-ad6f-5a4b35dd430f\" (UID: \"a3229854-d9da-4ed8-ad6f-5a4b35dd430f\") "
	Aug 05 22:57:05 addons-435364 kubelet[1261]: I0805 22:57:05.318416    1261 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3229854-d9da-4ed8-ad6f-5a4b35dd430f-kube-api-access-vnfbx" (OuterVolumeSpecName: "kube-api-access-vnfbx") pod "a3229854-d9da-4ed8-ad6f-5a4b35dd430f" (UID: "a3229854-d9da-4ed8-ad6f-5a4b35dd430f"). InnerVolumeSpecName "kube-api-access-vnfbx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 05 22:57:05 addons-435364 kubelet[1261]: I0805 22:57:05.346793    1261 scope.go:117] "RemoveContainer" containerID="843a37adb611de534859d8f8b83d9b71d53866cbe7c545e75efeac85a8c23b5e"
	Aug 05 22:57:05 addons-435364 kubelet[1261]: I0805 22:57:05.381542    1261 scope.go:117] "RemoveContainer" containerID="843a37adb611de534859d8f8b83d9b71d53866cbe7c545e75efeac85a8c23b5e"
	Aug 05 22:57:05 addons-435364 kubelet[1261]: E0805 22:57:05.382335    1261 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"843a37adb611de534859d8f8b83d9b71d53866cbe7c545e75efeac85a8c23b5e\": container with ID starting with 843a37adb611de534859d8f8b83d9b71d53866cbe7c545e75efeac85a8c23b5e not found: ID does not exist" containerID="843a37adb611de534859d8f8b83d9b71d53866cbe7c545e75efeac85a8c23b5e"
	Aug 05 22:57:05 addons-435364 kubelet[1261]: I0805 22:57:05.382446    1261 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"843a37adb611de534859d8f8b83d9b71d53866cbe7c545e75efeac85a8c23b5e"} err="failed to get container status \"843a37adb611de534859d8f8b83d9b71d53866cbe7c545e75efeac85a8c23b5e\": rpc error: code = NotFound desc = could not find container \"843a37adb611de534859d8f8b83d9b71d53866cbe7c545e75efeac85a8c23b5e\": container with ID starting with 843a37adb611de534859d8f8b83d9b71d53866cbe7c545e75efeac85a8c23b5e not found: ID does not exist"
	Aug 05 22:57:05 addons-435364 kubelet[1261]: I0805 22:57:05.417341    1261 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-vnfbx\" (UniqueName: \"kubernetes.io/projected/a3229854-d9da-4ed8-ad6f-5a4b35dd430f-kube-api-access-vnfbx\") on node \"addons-435364\" DevicePath \"\""
	Aug 05 22:57:05 addons-435364 kubelet[1261]: I0805 22:57:05.657877    1261 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3229854-d9da-4ed8-ad6f-5a4b35dd430f" path="/var/lib/kubelet/pods/a3229854-d9da-4ed8-ad6f-5a4b35dd430f/volumes"
	Aug 05 22:57:07 addons-435364 kubelet[1261]: I0805 22:57:07.659729    1261 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56bfd78b-7481-4a9f-879d-0bcdbcf050cd" path="/var/lib/kubelet/pods/56bfd78b-7481-4a9f-879d-0bcdbcf050cd/volumes"
	Aug 05 22:57:07 addons-435364 kubelet[1261]: I0805 22:57:07.660514    1261 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae26cc1e-446c-4ab4-8cb1-7719e5cfb06f" path="/var/lib/kubelet/pods/ae26cc1e-446c-4ab4-8cb1-7719e5cfb06f/volumes"
	Aug 05 22:57:09 addons-435364 kubelet[1261]: I0805 22:57:09.349931    1261 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr9kb\" (UniqueName: \"kubernetes.io/projected/2fcea459-fe3a-4bc2-a5ce-8fede4b5739c-kube-api-access-hr9kb\") pod \"2fcea459-fe3a-4bc2-a5ce-8fede4b5739c\" (UID: \"2fcea459-fe3a-4bc2-a5ce-8fede4b5739c\") "
	Aug 05 22:57:09 addons-435364 kubelet[1261]: I0805 22:57:09.349989    1261 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2fcea459-fe3a-4bc2-a5ce-8fede4b5739c-webhook-cert\") pod \"2fcea459-fe3a-4bc2-a5ce-8fede4b5739c\" (UID: \"2fcea459-fe3a-4bc2-a5ce-8fede4b5739c\") "
	Aug 05 22:57:09 addons-435364 kubelet[1261]: I0805 22:57:09.352319    1261 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fcea459-fe3a-4bc2-a5ce-8fede4b5739c-kube-api-access-hr9kb" (OuterVolumeSpecName: "kube-api-access-hr9kb") pod "2fcea459-fe3a-4bc2-a5ce-8fede4b5739c" (UID: "2fcea459-fe3a-4bc2-a5ce-8fede4b5739c"). InnerVolumeSpecName "kube-api-access-hr9kb". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 05 22:57:09 addons-435364 kubelet[1261]: I0805 22:57:09.353765    1261 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fcea459-fe3a-4bc2-a5ce-8fede4b5739c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "2fcea459-fe3a-4bc2-a5ce-8fede4b5739c" (UID: "2fcea459-fe3a-4bc2-a5ce-8fede4b5739c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 05 22:57:09 addons-435364 kubelet[1261]: I0805 22:57:09.374353    1261 scope.go:117] "RemoveContainer" containerID="720845da1b34d678026831864c814317d368c681678de212f77fbd90ca8943f1"
	Aug 05 22:57:09 addons-435364 kubelet[1261]: I0805 22:57:09.405090    1261 scope.go:117] "RemoveContainer" containerID="720845da1b34d678026831864c814317d368c681678de212f77fbd90ca8943f1"
	Aug 05 22:57:09 addons-435364 kubelet[1261]: E0805 22:57:09.406057    1261 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"720845da1b34d678026831864c814317d368c681678de212f77fbd90ca8943f1\": container with ID starting with 720845da1b34d678026831864c814317d368c681678de212f77fbd90ca8943f1 not found: ID does not exist" containerID="720845da1b34d678026831864c814317d368c681678de212f77fbd90ca8943f1"
	Aug 05 22:57:09 addons-435364 kubelet[1261]: I0805 22:57:09.406102    1261 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"720845da1b34d678026831864c814317d368c681678de212f77fbd90ca8943f1"} err="failed to get container status \"720845da1b34d678026831864c814317d368c681678de212f77fbd90ca8943f1\": rpc error: code = NotFound desc = could not find container \"720845da1b34d678026831864c814317d368c681678de212f77fbd90ca8943f1\": container with ID starting with 720845da1b34d678026831864c814317d368c681678de212f77fbd90ca8943f1 not found: ID does not exist"
	Aug 05 22:57:09 addons-435364 kubelet[1261]: I0805 22:57:09.451065    1261 reconciler_common.go:289] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2fcea459-fe3a-4bc2-a5ce-8fede4b5739c-webhook-cert\") on node \"addons-435364\" DevicePath \"\""
	Aug 05 22:57:09 addons-435364 kubelet[1261]: I0805 22:57:09.451116    1261 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-hr9kb\" (UniqueName: \"kubernetes.io/projected/2fcea459-fe3a-4bc2-a5ce-8fede4b5739c-kube-api-access-hr9kb\") on node \"addons-435364\" DevicePath \"\""
	Aug 05 22:57:09 addons-435364 kubelet[1261]: I0805 22:57:09.660189    1261 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fcea459-fe3a-4bc2-a5ce-8fede4b5739c" path="/var/lib/kubelet/pods/2fcea459-fe3a-4bc2-a5ce-8fede4b5739c/volumes"
	
	
	==> storage-provisioner [7b5c994323214402a42053a26dbdf6aaa73eeb251beee1a898876e1c323893d5] <==
	I0805 22:50:50.943746       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0805 22:50:50.963903       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0805 22:50:50.964017       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0805 22:50:50.990996       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0805 22:50:50.991300       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-435364_e0160293-cccb-4792-999f-05db47e0382d!
	I0805 22:50:51.002504       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c09a9f56-5d3d-4b22-8bb5-14529760680a", APIVersion:"v1", ResourceVersion:"626", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-435364_e0160293-cccb-4792-999f-05db47e0382d became leader
	I0805 22:50:51.093124       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-435364_e0160293-cccb-4792-999f-05db47e0382d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-435364 -n addons-435364
helpers_test.go:261: (dbg) Run:  kubectl --context addons-435364 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (151.73s)
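Follow-up note (not part of the recorded run): the Ingress test failed on its in-VM curl check; the Audit table below shows the `ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'` invocation with no End Time. A hedged sketch of manual re-checks against the same profile, assuming the addons-435364 profile and kubeconfig context still exist and that the ingress-nginx addon uses its standard labels:

	# Repeat the in-VM probe with verbose output and a bounded timeout to see where it stalls
	out/minikube-linux-amd64 -p addons-435364 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# Confirm the controller pod, its service, and the nginx Ingress object are present
	kubectl --context addons-435364 -n ingress-nginx get pods,svc
	kubectl --context addons-435364 get ingress -A
	# Controller logs usually show whether the request ever reached ingress-nginx
	kubectl --context addons-435364 -n ingress-nginx logs -l app.kubernetes.io/component=controller --tail=100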

                                                
                                    
TestAddons/parallel/MetricsServer (366.97s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 3.211196ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-m9t52" [f825462d-de15-4aa7-9436-76eda3bbd66f] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.006676814s
addons_test.go:417: (dbg) Run:  kubectl --context addons-435364 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-435364 top pods -n kube-system: exit status 1 (65.859714ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-ng8rk, age: 2m58.15642479s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-435364 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-435364 top pods -n kube-system: exit status 1 (69.413557ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-ng8rk, age: 3m0.159446706s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-435364 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-435364 top pods -n kube-system: exit status 1 (81.912082ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-ng8rk, age: 3m5.923871734s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-435364 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-435364 top pods -n kube-system: exit status 1 (84.552045ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-ng8rk, age: 3m14.761255716s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-435364 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-435364 top pods -n kube-system: exit status 1 (66.910829ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-ng8rk, age: 3m28.511241013s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-435364 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-435364 top pods -n kube-system: exit status 1 (62.850901ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-ng8rk, age: 3m39.797243272s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-435364 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-435364 top pods -n kube-system: exit status 1 (67.677679ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-ng8rk, age: 3m59.727374849s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-435364 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-435364 top pods -n kube-system: exit status 1 (61.90305ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-ng8rk, age: 4m27.035199914s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-435364 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-435364 top pods -n kube-system: exit status 1 (65.361959ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-ng8rk, age: 5m24.387044178s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-435364 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-435364 top pods -n kube-system: exit status 1 (65.934911ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-ng8rk, age: 6m10.321377497s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-435364 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-435364 top pods -n kube-system: exit status 1 (67.262016ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-ng8rk, age: 7m18.616702114s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-435364 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-435364 top pods -n kube-system: exit status 1 (63.169037ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-ng8rk, age: 8m25.68739466s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-435364 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-435364 top pods -n kube-system: exit status 1 (60.995252ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7db6d8ff4d-ng8rk, age: 8m57.312443369s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
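Follow-up note (not part of the recorded run): every `kubectl top pods` attempt above returned "Metrics not available" for kube-system/coredns-7db6d8ff4d-ng8rk until the check gave up, which typically indicates the Metrics API (metrics.k8s.io) never became available. A minimal diagnostic sketch one could run against the same profile, assuming the addons-435364 context is still reachable and the addon uses the stock k8s-app=metrics-server labels:

	# Is the metrics-server pod running, and is its APIService registered and Available?
	kubectl --context addons-435364 -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context addons-435364 get apiservice v1beta1.metrics.k8s.io
	# Scrape failures (kubelet TLS or connectivity problems) normally show up in the metrics-server logs
	kubectl --context addons-435364 -n kube-system logs -l k8s-app=metrics-server --tail=100
	# Node-level metrics usually become available before per-pod metrics
	kubectl --context addons-435364 top nodes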
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-435364 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-435364 -n addons-435364
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-435364 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-435364 logs -n 25: (1.259416967s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-068196                                                                     | download-only-068196 | jenkins | v1.33.1 | 05 Aug 24 22:49 UTC | 05 Aug 24 22:49 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-208535 | jenkins | v1.33.1 | 05 Aug 24 22:49 UTC |                     |
	|         | binary-mirror-208535                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:41223                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-208535                                                                     | binary-mirror-208535 | jenkins | v1.33.1 | 05 Aug 24 22:49 UTC | 05 Aug 24 22:49 UTC |
	| addons  | disable dashboard -p                                                                        | addons-435364        | jenkins | v1.33.1 | 05 Aug 24 22:49 UTC |                     |
	|         | addons-435364                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-435364        | jenkins | v1.33.1 | 05 Aug 24 22:49 UTC |                     |
	|         | addons-435364                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-435364 --wait=true                                                                | addons-435364        | jenkins | v1.33.1 | 05 Aug 24 22:49 UTC | 05 Aug 24 22:53 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-435364 addons disable                                                                | addons-435364        | jenkins | v1.33.1 | 05 Aug 24 22:53 UTC | 05 Aug 24 22:53 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-435364 ssh cat                                                                       | addons-435364        | jenkins | v1.33.1 | 05 Aug 24 22:53 UTC | 05 Aug 24 22:53 UTC |
	|         | /opt/local-path-provisioner/pvc-df517976-b98a-4ba5-bb26-cc04d40ee4f9_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-435364 addons disable                                                                | addons-435364        | jenkins | v1.33.1 | 05 Aug 24 22:53 UTC | 05 Aug 24 22:54 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-435364 ip                                                                            | addons-435364        | jenkins | v1.33.1 | 05 Aug 24 22:53 UTC | 05 Aug 24 22:53 UTC |
	| addons  | addons-435364 addons disable                                                                | addons-435364        | jenkins | v1.33.1 | 05 Aug 24 22:53 UTC | 05 Aug 24 22:53 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-435364 addons disable                                                                | addons-435364        | jenkins | v1.33.1 | 05 Aug 24 22:54 UTC | 05 Aug 24 22:54 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-435364        | jenkins | v1.33.1 | 05 Aug 24 22:54 UTC | 05 Aug 24 22:54 UTC |
	|         | -p addons-435364                                                                            |                      |         |         |                     |                     |
	| addons  | addons-435364 addons disable                                                                | addons-435364        | jenkins | v1.33.1 | 05 Aug 24 22:54 UTC | 05 Aug 24 22:54 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-435364        | jenkins | v1.33.1 | 05 Aug 24 22:54 UTC | 05 Aug 24 22:54 UTC |
	|         | addons-435364                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-435364        | jenkins | v1.33.1 | 05 Aug 24 22:54 UTC | 05 Aug 24 22:54 UTC |
	|         | -p addons-435364                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-435364        | jenkins | v1.33.1 | 05 Aug 24 22:54 UTC | 05 Aug 24 22:54 UTC |
	|         | addons-435364                                                                               |                      |         |         |                     |                     |
	| addons  | addons-435364 addons                                                                        | addons-435364        | jenkins | v1.33.1 | 05 Aug 24 22:54 UTC | 05 Aug 24 22:54 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-435364 addons                                                                        | addons-435364        | jenkins | v1.33.1 | 05 Aug 24 22:54 UTC | 05 Aug 24 22:54 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-435364 addons disable                                                                | addons-435364        | jenkins | v1.33.1 | 05 Aug 24 22:54 UTC | 05 Aug 24 22:54 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-435364 ssh curl -s                                                                   | addons-435364        | jenkins | v1.33.1 | 05 Aug 24 22:54 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-435364 ip                                                                            | addons-435364        | jenkins | v1.33.1 | 05 Aug 24 22:57 UTC | 05 Aug 24 22:57 UTC |
	| addons  | addons-435364 addons disable                                                                | addons-435364        | jenkins | v1.33.1 | 05 Aug 24 22:57 UTC | 05 Aug 24 22:57 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-435364 addons disable                                                                | addons-435364        | jenkins | v1.33.1 | 05 Aug 24 22:57 UTC | 05 Aug 24 22:57 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-435364 addons                                                                        | addons-435364        | jenkins | v1.33.1 | 05 Aug 24 22:59 UTC | 05 Aug 24 22:59 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 22:49:48
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 22:49:48.506687   18059 out.go:291] Setting OutFile to fd 1 ...
	I0805 22:49:48.506951   18059 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 22:49:48.506961   18059 out.go:304] Setting ErrFile to fd 2...
	I0805 22:49:48.506968   18059 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 22:49:48.507203   18059 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-9606/.minikube/bin
	I0805 22:49:48.507793   18059 out.go:298] Setting JSON to false
	I0805 22:49:48.508591   18059 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1934,"bootTime":1722896254,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0805 22:49:48.508647   18059 start.go:139] virtualization: kvm guest
	I0805 22:49:48.510554   18059 out.go:177] * [addons-435364] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0805 22:49:48.511915   18059 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 22:49:48.511934   18059 notify.go:220] Checking for updates...
	I0805 22:49:48.514715   18059 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 22:49:48.515975   18059 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19373-9606/kubeconfig
	I0805 22:49:48.517159   18059 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19373-9606/.minikube
	I0805 22:49:48.518144   18059 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0805 22:49:48.519283   18059 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 22:49:48.520484   18059 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 22:49:48.551637   18059 out.go:177] * Using the kvm2 driver based on user configuration
	I0805 22:49:48.552951   18059 start.go:297] selected driver: kvm2
	I0805 22:49:48.552970   18059 start.go:901] validating driver "kvm2" against <nil>
	I0805 22:49:48.552988   18059 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 22:49:48.553710   18059 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 22:49:48.553823   18059 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19373-9606/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0805 22:49:48.568117   18059 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0805 22:49:48.568172   18059 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 22:49:48.568491   18059 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 22:49:48.568525   18059 cni.go:84] Creating CNI manager for ""
	I0805 22:49:48.568534   18059 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 22:49:48.568548   18059 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 22:49:48.568616   18059 start.go:340] cluster config:
	{Name:addons-435364 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-435364 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 22:49:48.568734   18059 iso.go:125] acquiring lock: {Name:mk54a637ed625e04bb2b6adf973b61c976cd6d35 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 22:49:48.570806   18059 out.go:177] * Starting "addons-435364" primary control-plane node in "addons-435364" cluster
	I0805 22:49:48.572189   18059 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 22:49:48.572237   18059 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0805 22:49:48.572248   18059 cache.go:56] Caching tarball of preloaded images
	I0805 22:49:48.572337   18059 preload.go:172] Found /home/jenkins/minikube-integration/19373-9606/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0805 22:49:48.572350   18059 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0805 22:49:48.572670   18059 profile.go:143] Saving config to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/config.json ...
	I0805 22:49:48.572694   18059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/config.json: {Name:mk973d1a7b74d62cfc2a1a5b42c5b5e91a472399 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:49:48.572847   18059 start.go:360] acquireMachinesLock for addons-435364: {Name:mkd2ba511c39504598222edbf83078b718329186 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 22:49:48.572906   18059 start.go:364] duration metric: took 42.285µs to acquireMachinesLock for "addons-435364"
	I0805 22:49:48.572927   18059 start.go:93] Provisioning new machine with config: &{Name:addons-435364 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:addons-435364 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 22:49:48.573017   18059 start.go:125] createHost starting for "" (driver="kvm2")
	I0805 22:49:48.574792   18059 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0805 22:49:48.574960   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:49:48.575012   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:49:48.589322   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38181
	I0805 22:49:48.589789   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:49:48.590344   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:49:48.590368   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:49:48.590717   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:49:48.590946   18059 main.go:141] libmachine: (addons-435364) Calling .GetMachineName
	I0805 22:49:48.591091   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:49:48.591253   18059 start.go:159] libmachine.API.Create for "addons-435364" (driver="kvm2")
	I0805 22:49:48.591284   18059 client.go:168] LocalClient.Create starting
	I0805 22:49:48.591329   18059 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem
	I0805 22:49:48.777977   18059 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem
	I0805 22:49:48.845591   18059 main.go:141] libmachine: Running pre-create checks...
	I0805 22:49:48.845614   18059 main.go:141] libmachine: (addons-435364) Calling .PreCreateCheck
	I0805 22:49:48.846110   18059 main.go:141] libmachine: (addons-435364) Calling .GetConfigRaw
	I0805 22:49:48.846547   18059 main.go:141] libmachine: Creating machine...
	I0805 22:49:48.846561   18059 main.go:141] libmachine: (addons-435364) Calling .Create
	I0805 22:49:48.846702   18059 main.go:141] libmachine: (addons-435364) Creating KVM machine...
	I0805 22:49:48.847962   18059 main.go:141] libmachine: (addons-435364) DBG | found existing default KVM network
	I0805 22:49:48.848688   18059 main.go:141] libmachine: (addons-435364) DBG | I0805 22:49:48.848564   18080 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0805 22:49:48.848704   18059 main.go:141] libmachine: (addons-435364) DBG | created network xml: 
	I0805 22:49:48.848791   18059 main.go:141] libmachine: (addons-435364) DBG | <network>
	I0805 22:49:48.848851   18059 main.go:141] libmachine: (addons-435364) DBG |   <name>mk-addons-435364</name>
	I0805 22:49:48.848925   18059 main.go:141] libmachine: (addons-435364) DBG |   <dns enable='no'/>
	I0805 22:49:48.848950   18059 main.go:141] libmachine: (addons-435364) DBG |   
	I0805 22:49:48.848966   18059 main.go:141] libmachine: (addons-435364) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0805 22:49:48.848980   18059 main.go:141] libmachine: (addons-435364) DBG |     <dhcp>
	I0805 22:49:48.848992   18059 main.go:141] libmachine: (addons-435364) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0805 22:49:48.849001   18059 main.go:141] libmachine: (addons-435364) DBG |     </dhcp>
	I0805 22:49:48.849009   18059 main.go:141] libmachine: (addons-435364) DBG |   </ip>
	I0805 22:49:48.849016   18059 main.go:141] libmachine: (addons-435364) DBG |   
	I0805 22:49:48.849023   18059 main.go:141] libmachine: (addons-435364) DBG | </network>
	I0805 22:49:48.849032   18059 main.go:141] libmachine: (addons-435364) DBG | 
	I0805 22:49:48.854063   18059 main.go:141] libmachine: (addons-435364) DBG | trying to create private KVM network mk-addons-435364 192.168.39.0/24...
	I0805 22:49:48.915039   18059 main.go:141] libmachine: (addons-435364) DBG | private KVM network mk-addons-435364 192.168.39.0/24 created
	I0805 22:49:48.915081   18059 main.go:141] libmachine: (addons-435364) Setting up store path in /home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364 ...
	I0805 22:49:48.915099   18059 main.go:141] libmachine: (addons-435364) DBG | I0805 22:49:48.914965   18080 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19373-9606/.minikube
	I0805 22:49:48.915133   18059 main.go:141] libmachine: (addons-435364) Building disk image from file:///home/jenkins/minikube-integration/19373-9606/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0805 22:49:48.915234   18059 main.go:141] libmachine: (addons-435364) Downloading /home/jenkins/minikube-integration/19373-9606/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19373-9606/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0805 22:49:49.168734   18059 main.go:141] libmachine: (addons-435364) DBG | I0805 22:49:49.168581   18080 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa...
	I0805 22:49:49.322697   18059 main.go:141] libmachine: (addons-435364) DBG | I0805 22:49:49.322569   18080 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/addons-435364.rawdisk...
	I0805 22:49:49.322722   18059 main.go:141] libmachine: (addons-435364) DBG | Writing magic tar header
	I0805 22:49:49.322732   18059 main.go:141] libmachine: (addons-435364) DBG | Writing SSH key tar header
	I0805 22:49:49.322789   18059 main.go:141] libmachine: (addons-435364) DBG | I0805 22:49:49.322748   18080 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364 ...
	I0805 22:49:49.322876   18059 main.go:141] libmachine: (addons-435364) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364
	I0805 22:49:49.322906   18059 main.go:141] libmachine: (addons-435364) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19373-9606/.minikube/machines
	I0805 22:49:49.322921   18059 main.go:141] libmachine: (addons-435364) Setting executable bit set on /home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364 (perms=drwx------)
	I0805 22:49:49.322927   18059 main.go:141] libmachine: (addons-435364) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19373-9606/.minikube
	I0805 22:49:49.322936   18059 main.go:141] libmachine: (addons-435364) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19373-9606
	I0805 22:49:49.322944   18059 main.go:141] libmachine: (addons-435364) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0805 22:49:49.322951   18059 main.go:141] libmachine: (addons-435364) DBG | Checking permissions on dir: /home/jenkins
	I0805 22:49:49.322963   18059 main.go:141] libmachine: (addons-435364) Setting executable bit set on /home/jenkins/minikube-integration/19373-9606/.minikube/machines (perms=drwxr-xr-x)
	I0805 22:49:49.322970   18059 main.go:141] libmachine: (addons-435364) Setting executable bit set on /home/jenkins/minikube-integration/19373-9606/.minikube (perms=drwxr-xr-x)
	I0805 22:49:49.322977   18059 main.go:141] libmachine: (addons-435364) Setting executable bit set on /home/jenkins/minikube-integration/19373-9606 (perms=drwxrwxr-x)
	I0805 22:49:49.322990   18059 main.go:141] libmachine: (addons-435364) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0805 22:49:49.323000   18059 main.go:141] libmachine: (addons-435364) DBG | Checking permissions on dir: /home
	I0805 22:49:49.323008   18059 main.go:141] libmachine: (addons-435364) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0805 22:49:49.323021   18059 main.go:141] libmachine: (addons-435364) DBG | Skipping /home - not owner
	I0805 22:49:49.323030   18059 main.go:141] libmachine: (addons-435364) Creating domain...
	I0805 22:49:49.324031   18059 main.go:141] libmachine: (addons-435364) define libvirt domain using xml: 
	I0805 22:49:49.324049   18059 main.go:141] libmachine: (addons-435364) <domain type='kvm'>
	I0805 22:49:49.324058   18059 main.go:141] libmachine: (addons-435364)   <name>addons-435364</name>
	I0805 22:49:49.324066   18059 main.go:141] libmachine: (addons-435364)   <memory unit='MiB'>4000</memory>
	I0805 22:49:49.324075   18059 main.go:141] libmachine: (addons-435364)   <vcpu>2</vcpu>
	I0805 22:49:49.324091   18059 main.go:141] libmachine: (addons-435364)   <features>
	I0805 22:49:49.324125   18059 main.go:141] libmachine: (addons-435364)     <acpi/>
	I0805 22:49:49.324155   18059 main.go:141] libmachine: (addons-435364)     <apic/>
	I0805 22:49:49.324179   18059 main.go:141] libmachine: (addons-435364)     <pae/>
	I0805 22:49:49.324194   18059 main.go:141] libmachine: (addons-435364)     
	I0805 22:49:49.324204   18059 main.go:141] libmachine: (addons-435364)   </features>
	I0805 22:49:49.324210   18059 main.go:141] libmachine: (addons-435364)   <cpu mode='host-passthrough'>
	I0805 22:49:49.324231   18059 main.go:141] libmachine: (addons-435364)   
	I0805 22:49:49.324245   18059 main.go:141] libmachine: (addons-435364)   </cpu>
	I0805 22:49:49.324254   18059 main.go:141] libmachine: (addons-435364)   <os>
	I0805 22:49:49.324260   18059 main.go:141] libmachine: (addons-435364)     <type>hvm</type>
	I0805 22:49:49.324269   18059 main.go:141] libmachine: (addons-435364)     <boot dev='cdrom'/>
	I0805 22:49:49.324279   18059 main.go:141] libmachine: (addons-435364)     <boot dev='hd'/>
	I0805 22:49:49.324296   18059 main.go:141] libmachine: (addons-435364)     <bootmenu enable='no'/>
	I0805 22:49:49.324309   18059 main.go:141] libmachine: (addons-435364)   </os>
	I0805 22:49:49.324319   18059 main.go:141] libmachine: (addons-435364)   <devices>
	I0805 22:49:49.324331   18059 main.go:141] libmachine: (addons-435364)     <disk type='file' device='cdrom'>
	I0805 22:49:49.324348   18059 main.go:141] libmachine: (addons-435364)       <source file='/home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/boot2docker.iso'/>
	I0805 22:49:49.324360   18059 main.go:141] libmachine: (addons-435364)       <target dev='hdc' bus='scsi'/>
	I0805 22:49:49.324372   18059 main.go:141] libmachine: (addons-435364)       <readonly/>
	I0805 22:49:49.324382   18059 main.go:141] libmachine: (addons-435364)     </disk>
	I0805 22:49:49.324392   18059 main.go:141] libmachine: (addons-435364)     <disk type='file' device='disk'>
	I0805 22:49:49.324405   18059 main.go:141] libmachine: (addons-435364)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0805 22:49:49.324421   18059 main.go:141] libmachine: (addons-435364)       <source file='/home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/addons-435364.rawdisk'/>
	I0805 22:49:49.324434   18059 main.go:141] libmachine: (addons-435364)       <target dev='hda' bus='virtio'/>
	I0805 22:49:49.324443   18059 main.go:141] libmachine: (addons-435364)     </disk>
	I0805 22:49:49.324458   18059 main.go:141] libmachine: (addons-435364)     <interface type='network'>
	I0805 22:49:49.324472   18059 main.go:141] libmachine: (addons-435364)       <source network='mk-addons-435364'/>
	I0805 22:49:49.324481   18059 main.go:141] libmachine: (addons-435364)       <model type='virtio'/>
	I0805 22:49:49.324493   18059 main.go:141] libmachine: (addons-435364)     </interface>
	I0805 22:49:49.324503   18059 main.go:141] libmachine: (addons-435364)     <interface type='network'>
	I0805 22:49:49.324516   18059 main.go:141] libmachine: (addons-435364)       <source network='default'/>
	I0805 22:49:49.324527   18059 main.go:141] libmachine: (addons-435364)       <model type='virtio'/>
	I0805 22:49:49.324539   18059 main.go:141] libmachine: (addons-435364)     </interface>
	I0805 22:49:49.324550   18059 main.go:141] libmachine: (addons-435364)     <serial type='pty'>
	I0805 22:49:49.324572   18059 main.go:141] libmachine: (addons-435364)       <target port='0'/>
	I0805 22:49:49.324587   18059 main.go:141] libmachine: (addons-435364)     </serial>
	I0805 22:49:49.324599   18059 main.go:141] libmachine: (addons-435364)     <console type='pty'>
	I0805 22:49:49.324610   18059 main.go:141] libmachine: (addons-435364)       <target type='serial' port='0'/>
	I0805 22:49:49.324622   18059 main.go:141] libmachine: (addons-435364)     </console>
	I0805 22:49:49.324633   18059 main.go:141] libmachine: (addons-435364)     <rng model='virtio'>
	I0805 22:49:49.324644   18059 main.go:141] libmachine: (addons-435364)       <backend model='random'>/dev/random</backend>
	I0805 22:49:49.324664   18059 main.go:141] libmachine: (addons-435364)     </rng>
	I0805 22:49:49.324676   18059 main.go:141] libmachine: (addons-435364)     
	I0805 22:49:49.324684   18059 main.go:141] libmachine: (addons-435364)     
	I0805 22:49:49.324696   18059 main.go:141] libmachine: (addons-435364)   </devices>
	I0805 22:49:49.324706   18059 main.go:141] libmachine: (addons-435364) </domain>
	I0805 22:49:49.324719   18059 main.go:141] libmachine: (addons-435364) 
	I0805 22:49:49.330031   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:64:94:3c in network default
	I0805 22:49:49.330509   18059 main.go:141] libmachine: (addons-435364) Ensuring networks are active...
	I0805 22:49:49.330532   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:49:49.331146   18059 main.go:141] libmachine: (addons-435364) Ensuring network default is active
	I0805 22:49:49.331442   18059 main.go:141] libmachine: (addons-435364) Ensuring network mk-addons-435364 is active
	I0805 22:49:49.331894   18059 main.go:141] libmachine: (addons-435364) Getting domain xml...
	I0805 22:49:49.332619   18059 main.go:141] libmachine: (addons-435364) Creating domain...
	I0805 22:49:50.750593   18059 main.go:141] libmachine: (addons-435364) Waiting to get IP...
	I0805 22:49:50.751328   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:49:50.751754   18059 main.go:141] libmachine: (addons-435364) DBG | unable to find current IP address of domain addons-435364 in network mk-addons-435364
	I0805 22:49:50.751795   18059 main.go:141] libmachine: (addons-435364) DBG | I0805 22:49:50.751705   18080 retry.go:31] will retry after 214.228264ms: waiting for machine to come up
	I0805 22:49:50.967104   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:49:50.967520   18059 main.go:141] libmachine: (addons-435364) DBG | unable to find current IP address of domain addons-435364 in network mk-addons-435364
	I0805 22:49:50.967548   18059 main.go:141] libmachine: (addons-435364) DBG | I0805 22:49:50.967480   18080 retry.go:31] will retry after 306.207664ms: waiting for machine to come up
	I0805 22:49:51.274919   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:49:51.275342   18059 main.go:141] libmachine: (addons-435364) DBG | unable to find current IP address of domain addons-435364 in network mk-addons-435364
	I0805 22:49:51.275381   18059 main.go:141] libmachine: (addons-435364) DBG | I0805 22:49:51.275318   18080 retry.go:31] will retry after 476.689069ms: waiting for machine to come up
	I0805 22:49:51.753916   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:49:51.754387   18059 main.go:141] libmachine: (addons-435364) DBG | unable to find current IP address of domain addons-435364 in network mk-addons-435364
	I0805 22:49:51.754409   18059 main.go:141] libmachine: (addons-435364) DBG | I0805 22:49:51.754345   18080 retry.go:31] will retry after 606.609457ms: waiting for machine to come up
	I0805 22:49:52.362172   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:49:52.362574   18059 main.go:141] libmachine: (addons-435364) DBG | unable to find current IP address of domain addons-435364 in network mk-addons-435364
	I0805 22:49:52.362610   18059 main.go:141] libmachine: (addons-435364) DBG | I0805 22:49:52.362542   18080 retry.go:31] will retry after 575.123699ms: waiting for machine to come up
	I0805 22:49:52.939358   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:49:52.939660   18059 main.go:141] libmachine: (addons-435364) DBG | unable to find current IP address of domain addons-435364 in network mk-addons-435364
	I0805 22:49:52.939684   18059 main.go:141] libmachine: (addons-435364) DBG | I0805 22:49:52.939637   18080 retry.go:31] will retry after 774.827552ms: waiting for machine to come up
	I0805 22:49:53.716066   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:49:53.716474   18059 main.go:141] libmachine: (addons-435364) DBG | unable to find current IP address of domain addons-435364 in network mk-addons-435364
	I0805 22:49:53.716504   18059 main.go:141] libmachine: (addons-435364) DBG | I0805 22:49:53.716445   18080 retry.go:31] will retry after 1.065801193s: waiting for machine to come up
	I0805 22:49:54.783763   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:49:54.784199   18059 main.go:141] libmachine: (addons-435364) DBG | unable to find current IP address of domain addons-435364 in network mk-addons-435364
	I0805 22:49:54.784217   18059 main.go:141] libmachine: (addons-435364) DBG | I0805 22:49:54.784166   18080 retry.go:31] will retry after 903.298303ms: waiting for machine to come up
	I0805 22:49:55.689188   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:49:55.689539   18059 main.go:141] libmachine: (addons-435364) DBG | unable to find current IP address of domain addons-435364 in network mk-addons-435364
	I0805 22:49:55.689565   18059 main.go:141] libmachine: (addons-435364) DBG | I0805 22:49:55.689499   18080 retry.go:31] will retry after 1.568408021s: waiting for machine to come up
	I0805 22:49:57.260214   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:49:57.260632   18059 main.go:141] libmachine: (addons-435364) DBG | unable to find current IP address of domain addons-435364 in network mk-addons-435364
	I0805 22:49:57.260652   18059 main.go:141] libmachine: (addons-435364) DBG | I0805 22:49:57.260613   18080 retry.go:31] will retry after 2.221891592s: waiting for machine to come up
	I0805 22:49:59.484039   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:49:59.484439   18059 main.go:141] libmachine: (addons-435364) DBG | unable to find current IP address of domain addons-435364 in network mk-addons-435364
	I0805 22:49:59.484472   18059 main.go:141] libmachine: (addons-435364) DBG | I0805 22:49:59.484362   18080 retry.go:31] will retry after 2.439349351s: waiting for machine to come up
	I0805 22:50:01.926995   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:01.927430   18059 main.go:141] libmachine: (addons-435364) DBG | unable to find current IP address of domain addons-435364 in network mk-addons-435364
	I0805 22:50:01.927452   18059 main.go:141] libmachine: (addons-435364) DBG | I0805 22:50:01.927393   18080 retry.go:31] will retry after 2.459070989s: waiting for machine to come up
	I0805 22:50:04.388244   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:04.388626   18059 main.go:141] libmachine: (addons-435364) DBG | unable to find current IP address of domain addons-435364 in network mk-addons-435364
	I0805 22:50:04.388646   18059 main.go:141] libmachine: (addons-435364) DBG | I0805 22:50:04.388589   18080 retry.go:31] will retry after 3.49088023s: waiting for machine to come up
	I0805 22:50:07.880582   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:07.880947   18059 main.go:141] libmachine: (addons-435364) DBG | unable to find current IP address of domain addons-435364 in network mk-addons-435364
	I0805 22:50:07.880973   18059 main.go:141] libmachine: (addons-435364) DBG | I0805 22:50:07.880896   18080 retry.go:31] will retry after 4.573943769s: waiting for machine to come up
	I0805 22:50:12.459645   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:12.460081   18059 main.go:141] libmachine: (addons-435364) Found IP for machine: 192.168.39.129
	I0805 22:50:12.460103   18059 main.go:141] libmachine: (addons-435364) Reserving static IP address...
	I0805 22:50:12.460116   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has current primary IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:12.460438   18059 main.go:141] libmachine: (addons-435364) DBG | unable to find host DHCP lease matching {name: "addons-435364", mac: "52:54:00:99:11:e1", ip: "192.168.39.129"} in network mk-addons-435364
	I0805 22:50:12.531547   18059 main.go:141] libmachine: (addons-435364) Reserved static IP address: 192.168.39.129
	I0805 22:50:12.531580   18059 main.go:141] libmachine: (addons-435364) DBG | Getting to WaitForSSH function...
	I0805 22:50:12.531589   18059 main.go:141] libmachine: (addons-435364) Waiting for SSH to be available...
	I0805 22:50:12.534126   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:12.534561   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:minikube Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:12.534589   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:12.534834   18059 main.go:141] libmachine: (addons-435364) DBG | Using SSH client type: external
	I0805 22:50:12.534857   18059 main.go:141] libmachine: (addons-435364) DBG | Using SSH private key: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa (-rw-------)
	I0805 22:50:12.534899   18059 main.go:141] libmachine: (addons-435364) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.129 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 22:50:12.534917   18059 main.go:141] libmachine: (addons-435364) DBG | About to run SSH command:
	I0805 22:50:12.534926   18059 main.go:141] libmachine: (addons-435364) DBG | exit 0
	I0805 22:50:12.667265   18059 main.go:141] libmachine: (addons-435364) DBG | SSH cmd err, output: <nil>: 
	I0805 22:50:12.667562   18059 main.go:141] libmachine: (addons-435364) KVM machine creation complete!
	I0805 22:50:12.667890   18059 main.go:141] libmachine: (addons-435364) Calling .GetConfigRaw
	I0805 22:50:12.668441   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:12.668643   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:12.668810   18059 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0805 22:50:12.668824   18059 main.go:141] libmachine: (addons-435364) Calling .GetState
	I0805 22:50:12.669869   18059 main.go:141] libmachine: Detecting operating system of created instance...
	I0805 22:50:12.669881   18059 main.go:141] libmachine: Waiting for SSH to be available...
	I0805 22:50:12.669886   18059 main.go:141] libmachine: Getting to WaitForSSH function...
	I0805 22:50:12.669891   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:12.672381   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:12.672722   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:12.672769   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:12.672897   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:12.673061   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:12.673205   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:12.673332   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:12.673480   18059 main.go:141] libmachine: Using SSH client type: native
	I0805 22:50:12.673641   18059 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0805 22:50:12.673650   18059 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0805 22:50:12.778666   18059 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 22:50:12.778694   18059 main.go:141] libmachine: Detecting the provisioner...
	I0805 22:50:12.778703   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:12.781514   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:12.782004   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:12.782038   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:12.782180   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:12.782390   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:12.782592   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:12.782768   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:12.783104   18059 main.go:141] libmachine: Using SSH client type: native
	I0805 22:50:12.783294   18059 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0805 22:50:12.783306   18059 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0805 22:50:12.887921   18059 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0805 22:50:12.888035   18059 main.go:141] libmachine: found compatible host: buildroot
	I0805 22:50:12.888053   18059 main.go:141] libmachine: Provisioning with buildroot...
	I0805 22:50:12.888064   18059 main.go:141] libmachine: (addons-435364) Calling .GetMachineName
	I0805 22:50:12.888331   18059 buildroot.go:166] provisioning hostname "addons-435364"
	I0805 22:50:12.888358   18059 main.go:141] libmachine: (addons-435364) Calling .GetMachineName
	I0805 22:50:12.888560   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:12.891036   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:12.891368   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:12.891391   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:12.891551   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:12.891727   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:12.891890   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:12.892026   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:12.892199   18059 main.go:141] libmachine: Using SSH client type: native
	I0805 22:50:12.892447   18059 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0805 22:50:12.892464   18059 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-435364 && echo "addons-435364" | sudo tee /etc/hostname
	I0805 22:50:13.010217   18059 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-435364
	
	I0805 22:50:13.010241   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:13.012956   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.013254   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:13.013270   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.013412   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:13.013605   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:13.013769   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:13.013943   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:13.014074   18059 main.go:141] libmachine: Using SSH client type: native
	I0805 22:50:13.014260   18059 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0805 22:50:13.014275   18059 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-435364' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-435364/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-435364' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 22:50:13.131877   18059 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 22:50:13.131907   18059 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19373-9606/.minikube CaCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19373-9606/.minikube}
	I0805 22:50:13.131950   18059 buildroot.go:174] setting up certificates
	I0805 22:50:13.131963   18059 provision.go:84] configureAuth start
	I0805 22:50:13.131980   18059 main.go:141] libmachine: (addons-435364) Calling .GetMachineName
	I0805 22:50:13.132283   18059 main.go:141] libmachine: (addons-435364) Calling .GetIP
	I0805 22:50:13.134856   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.135215   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:13.135247   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.135438   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:13.137936   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.138352   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:13.138367   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.138549   18059 provision.go:143] copyHostCerts
	I0805 22:50:13.138633   18059 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem (1082 bytes)
	I0805 22:50:13.138778   18059 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem (1123 bytes)
	I0805 22:50:13.138846   18059 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem (1679 bytes)
	I0805 22:50:13.138916   18059 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem org=jenkins.addons-435364 san=[127.0.0.1 192.168.39.129 addons-435364 localhost minikube]
	I0805 22:50:13.252392   18059 provision.go:177] copyRemoteCerts
	I0805 22:50:13.252452   18059 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 22:50:13.252475   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:13.255186   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.255488   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:13.255522   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.255756   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:13.256003   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:13.256161   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:13.256359   18059 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa Username:docker}
	I0805 22:50:13.337367   18059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 22:50:13.364380   18059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 22:50:13.390303   18059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0805 22:50:13.414478   18059 provision.go:87] duration metric: took 282.497674ms to configureAuth
	I0805 22:50:13.414504   18059 buildroot.go:189] setting minikube options for container-runtime
	I0805 22:50:13.414670   18059 config.go:182] Loaded profile config "addons-435364": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 22:50:13.414755   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:13.417135   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.417454   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:13.417482   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.417628   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:13.417793   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:13.417964   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:13.418107   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:13.418289   18059 main.go:141] libmachine: Using SSH client type: native
	I0805 22:50:13.418442   18059 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0805 22:50:13.418457   18059 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 22:50:13.688306   18059 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 22:50:13.688328   18059 main.go:141] libmachine: Checking connection to Docker...
	I0805 22:50:13.688336   18059 main.go:141] libmachine: (addons-435364) Calling .GetURL
	I0805 22:50:13.689629   18059 main.go:141] libmachine: (addons-435364) DBG | Using libvirt version 6000000
	I0805 22:50:13.692003   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.692365   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:13.692395   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.692515   18059 main.go:141] libmachine: Docker is up and running!
	I0805 22:50:13.692530   18059 main.go:141] libmachine: Reticulating splines...
	I0805 22:50:13.692538   18059 client.go:171] duration metric: took 25.101243283s to LocalClient.Create
	I0805 22:50:13.692560   18059 start.go:167] duration metric: took 25.101307848s to libmachine.API.Create "addons-435364"
	I0805 22:50:13.692568   18059 start.go:293] postStartSetup for "addons-435364" (driver="kvm2")
	I0805 22:50:13.692576   18059 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 22:50:13.692593   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:13.692798   18059 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 22:50:13.692822   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:13.695008   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.695365   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:13.695387   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.695540   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:13.695731   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:13.695899   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:13.696038   18059 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa Username:docker}
	I0805 22:50:13.777738   18059 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 22:50:13.782406   18059 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 22:50:13.782428   18059 filesync.go:126] Scanning /home/jenkins/minikube-integration/19373-9606/.minikube/addons for local assets ...
	I0805 22:50:13.782495   18059 filesync.go:126] Scanning /home/jenkins/minikube-integration/19373-9606/.minikube/files for local assets ...
	I0805 22:50:13.782517   18059 start.go:296] duration metric: took 89.945033ms for postStartSetup
	I0805 22:50:13.782547   18059 main.go:141] libmachine: (addons-435364) Calling .GetConfigRaw
	I0805 22:50:13.783152   18059 main.go:141] libmachine: (addons-435364) Calling .GetIP
	I0805 22:50:13.785627   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.786019   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:13.786044   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.786223   18059 profile.go:143] Saving config to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/config.json ...
	I0805 22:50:13.786402   18059 start.go:128] duration metric: took 25.213374021s to createHost
	I0805 22:50:13.786423   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:13.788655   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.788971   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:13.788997   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.789136   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:13.789313   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:13.789473   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:13.789591   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:13.789755   18059 main.go:141] libmachine: Using SSH client type: native
	I0805 22:50:13.789961   18059 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0805 22:50:13.789974   18059 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 22:50:13.895900   18059 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722898213.874725931
	
	I0805 22:50:13.895920   18059 fix.go:216] guest clock: 1722898213.874725931
	I0805 22:50:13.895927   18059 fix.go:229] Guest: 2024-08-05 22:50:13.874725931 +0000 UTC Remote: 2024-08-05 22:50:13.78641307 +0000 UTC m=+25.311597235 (delta=88.312861ms)
	I0805 22:50:13.895975   18059 fix.go:200] guest clock delta is within tolerance: 88.312861ms
	I0805 22:50:13.895982   18059 start.go:83] releasing machines lock for "addons-435364", held for 25.32306573s
	I0805 22:50:13.896004   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:13.896238   18059 main.go:141] libmachine: (addons-435364) Calling .GetIP
	I0805 22:50:13.898897   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.899211   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:13.899238   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.899363   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:13.899813   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:13.899988   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:13.900078   18059 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 22:50:13.900118   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:13.900248   18059 ssh_runner.go:195] Run: cat /version.json
	I0805 22:50:13.900275   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:13.902706   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.902818   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.903061   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:13.903087   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.903204   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:13.903339   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:13.903364   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:13.903343   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:13.903506   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:13.903513   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:13.903655   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:13.903683   18059 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa Username:docker}
	I0805 22:50:13.903766   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:13.903892   18059 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa Username:docker}
	I0805 22:50:13.980141   18059 ssh_runner.go:195] Run: systemctl --version
	I0805 22:50:14.016703   18059 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 22:50:14.174612   18059 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 22:50:14.182545   18059 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 22:50:14.182608   18059 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 22:50:14.198797   18059 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 22:50:14.198828   18059 start.go:495] detecting cgroup driver to use...
	I0805 22:50:14.198900   18059 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 22:50:14.214476   18059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 22:50:14.229311   18059 docker.go:217] disabling cri-docker service (if available) ...
	I0805 22:50:14.229375   18059 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 22:50:14.243770   18059 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 22:50:14.258687   18059 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 22:50:14.372937   18059 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 22:50:14.517046   18059 docker.go:233] disabling docker service ...
	I0805 22:50:14.517137   18059 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 22:50:14.531702   18059 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 22:50:14.545769   18059 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 22:50:14.680419   18059 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 22:50:14.804056   18059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 22:50:14.818562   18059 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 22:50:14.837034   18059 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0805 22:50:14.837097   18059 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 22:50:14.847638   18059 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 22:50:14.847695   18059 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 22:50:14.858409   18059 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 22:50:14.868814   18059 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 22:50:14.879613   18059 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 22:50:14.890285   18059 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 22:50:14.900822   18059 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 22:50:14.918278   18059 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 22:50:14.928807   18059 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 22:50:14.938917   18059 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 22:50:14.938983   18059 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 22:50:14.953935   18059 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 22:50:14.964287   18059 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 22:50:15.080272   18059 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 22:50:15.215676   18059 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 22:50:15.215769   18059 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 22:50:15.220884   18059 start.go:563] Will wait 60s for crictl version
	I0805 22:50:15.220959   18059 ssh_runner.go:195] Run: which crictl
	I0805 22:50:15.225092   18059 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 22:50:15.269151   18059 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 22:50:15.269266   18059 ssh_runner.go:195] Run: crio --version
	I0805 22:50:15.298048   18059 ssh_runner.go:195] Run: crio --version
	I0805 22:50:15.329359   18059 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0805 22:50:15.330464   18059 main.go:141] libmachine: (addons-435364) Calling .GetIP
	I0805 22:50:15.333196   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:15.333615   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:15.333643   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:15.333872   18059 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0805 22:50:15.338208   18059 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 22:50:15.351724   18059 kubeadm.go:883] updating cluster {Name:addons-435364 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-435364 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 22:50:15.351836   18059 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 22:50:15.351876   18059 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 22:50:15.385565   18059 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0805 22:50:15.385622   18059 ssh_runner.go:195] Run: which lz4
	I0805 22:50:15.389587   18059 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0805 22:50:15.393856   18059 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 22:50:15.393885   18059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0805 22:50:16.732385   18059 crio.go:462] duration metric: took 1.34282579s to copy over tarball
	I0805 22:50:16.732456   18059 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0805 22:50:19.022271   18059 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.289790303s)
	I0805 22:50:19.022296   18059 crio.go:469] duration metric: took 2.289881495s to extract the tarball
	I0805 22:50:19.022303   18059 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0805 22:50:19.061425   18059 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 22:50:19.102433   18059 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 22:50:19.102456   18059 cache_images.go:84] Images are preloaded, skipping loading
	I0805 22:50:19.102466   18059 kubeadm.go:934] updating node { 192.168.39.129 8443 v1.30.3 crio true true} ...
	I0805 22:50:19.102557   18059 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-435364 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.129
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-435364 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 22:50:19.102623   18059 ssh_runner.go:195] Run: crio config
	I0805 22:50:19.155632   18059 cni.go:84] Creating CNI manager for ""
	I0805 22:50:19.155652   18059 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 22:50:19.155662   18059 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 22:50:19.155683   18059 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.129 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-435364 NodeName:addons-435364 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.129"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.129 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 22:50:19.155811   18059 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.129
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-435364"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.129
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.129"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 22:50:19.155874   18059 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 22:50:19.165888   18059 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 22:50:19.165963   18059 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 22:50:19.175277   18059 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0805 22:50:19.192493   18059 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 22:50:19.208928   18059 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0805 22:50:19.225460   18059 ssh_runner.go:195] Run: grep 192.168.39.129	control-plane.minikube.internal$ /etc/hosts
	I0805 22:50:19.229318   18059 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.129	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 22:50:19.241095   18059 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 22:50:19.362185   18059 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 22:50:19.379532   18059 certs.go:68] Setting up /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364 for IP: 192.168.39.129
	I0805 22:50:19.379556   18059 certs.go:194] generating shared ca certs ...
	I0805 22:50:19.379577   18059 certs.go:226] acquiring lock for ca certs: {Name:mkf35a042c1656d191f542eee7fa087aad4d29d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:50:19.379723   18059 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key
	I0805 22:50:19.477775   18059 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt ...
	I0805 22:50:19.477804   18059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt: {Name:mk5a02f51dff7ee2438dcf787168bbc744fdc790 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:50:19.477977   18059 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key ...
	I0805 22:50:19.477991   18059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key: {Name:mkfd2741899892a506c886eae840074b2142988d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:50:19.478087   18059 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key
	I0805 22:50:19.567461   18059 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.crt ...
	I0805 22:50:19.567489   18059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.crt: {Name:mk5879e0ceae46d834ba04a385271f59c818cb7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:50:19.567659   18059 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key ...
	I0805 22:50:19.567673   18059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key: {Name:mk49a52ce17f5d704f71d551b9fec2c09707cba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:50:19.567765   18059 certs.go:256] generating profile certs ...
	I0805 22:50:19.567837   18059 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.key
	I0805 22:50:19.567855   18059 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.crt with IP's: []
	I0805 22:50:19.735776   18059 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.crt ...
	I0805 22:50:19.735828   18059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.crt: {Name:mka04aa8f5aebff03fdcb9f309b7f635eb1fd742 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:50:19.736004   18059 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.key ...
	I0805 22:50:19.736018   18059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.key: {Name:mkc50683ce65cc98818fb6ea611c4e350f4aa4ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:50:19.736115   18059 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/apiserver.key.4a22a0e7
	I0805 22:50:19.736134   18059 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/apiserver.crt.4a22a0e7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.129]
	I0805 22:50:19.914172   18059 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/apiserver.crt.4a22a0e7 ...
	I0805 22:50:19.914202   18059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/apiserver.crt.4a22a0e7: {Name:mkdea0de3134b785bd45cea7b22b0f2fba2ef2b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:50:19.914375   18059 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/apiserver.key.4a22a0e7 ...
	I0805 22:50:19.914391   18059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/apiserver.key.4a22a0e7: {Name:mkc82a40128b9bfeccfb6506850f6c0fbad6215f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:50:19.914487   18059 certs.go:381] copying /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/apiserver.crt.4a22a0e7 -> /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/apiserver.crt
	I0805 22:50:19.914586   18059 certs.go:385] copying /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/apiserver.key.4a22a0e7 -> /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/apiserver.key
	I0805 22:50:19.914637   18059 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/proxy-client.key
	I0805 22:50:19.914662   18059 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/proxy-client.crt with IP's: []
	I0805 22:50:20.035665   18059 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/proxy-client.crt ...
	I0805 22:50:20.035694   18059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/proxy-client.crt: {Name:mk057d4c8848c939368271362943917ddd178d9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:50:20.035870   18059 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/proxy-client.key ...
	I0805 22:50:20.035883   18059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/proxy-client.key: {Name:mkfd1b82b140a28dc229a1ce2c7e53ec16a877a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:50:20.036080   18059 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 22:50:20.036113   18059 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem (1082 bytes)
	I0805 22:50:20.036136   18059 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem (1123 bytes)
	I0805 22:50:20.036158   18059 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem (1679 bytes)
	I0805 22:50:20.036726   18059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 22:50:20.066052   18059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0805 22:50:20.094029   18059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 22:50:20.118884   18059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 22:50:20.146757   18059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0805 22:50:20.173214   18059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0805 22:50:20.198268   18059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 22:50:20.223942   18059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 22:50:20.247802   18059 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 22:50:20.276375   18059 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 22:50:20.293288   18059 ssh_runner.go:195] Run: openssl version
	I0805 22:50:20.299618   18059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 22:50:20.311271   18059 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 22:50:20.315868   18059 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0805 22:50:20.315920   18059 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 22:50:20.321840   18059 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 22:50:20.332834   18059 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 22:50:20.337260   18059 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0805 22:50:20.337302   18059 kubeadm.go:392] StartCluster: {Name:addons-435364 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-435364 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 22:50:20.337366   18059 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 22:50:20.337405   18059 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 22:50:20.375953   18059 cri.go:89] found id: ""
	I0805 22:50:20.376022   18059 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 22:50:20.386995   18059 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 22:50:20.397281   18059 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 22:50:20.407493   18059 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 22:50:20.407510   18059 kubeadm.go:157] found existing configuration files:
	
	I0805 22:50:20.407547   18059 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 22:50:20.416692   18059 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 22:50:20.416748   18059 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 22:50:20.426696   18059 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 22:50:20.436221   18059 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 22:50:20.436275   18059 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 22:50:20.446349   18059 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 22:50:20.455826   18059 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 22:50:20.455890   18059 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 22:50:20.466797   18059 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 22:50:20.476183   18059 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 22:50:20.476242   18059 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 22:50:20.485645   18059 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 22:50:20.677617   18059 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 22:50:30.354732   18059 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0805 22:50:30.354813   18059 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 22:50:30.354918   18059 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 22:50:30.355046   18059 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 22:50:30.355205   18059 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 22:50:30.355323   18059 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 22:50:30.357294   18059 out.go:204]   - Generating certificates and keys ...
	I0805 22:50:30.357380   18059 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 22:50:30.357453   18059 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 22:50:30.357532   18059 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0805 22:50:30.357612   18059 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0805 22:50:30.357689   18059 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0805 22:50:30.357732   18059 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0805 22:50:30.357778   18059 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0805 22:50:30.357902   18059 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-435364 localhost] and IPs [192.168.39.129 127.0.0.1 ::1]
	I0805 22:50:30.357996   18059 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0805 22:50:30.358128   18059 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-435364 localhost] and IPs [192.168.39.129 127.0.0.1 ::1]
	I0805 22:50:30.358218   18059 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0805 22:50:30.358324   18059 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0805 22:50:30.358394   18059 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0805 22:50:30.358477   18059 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 22:50:30.358551   18059 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 22:50:30.358605   18059 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0805 22:50:30.358653   18059 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 22:50:30.358711   18059 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 22:50:30.358757   18059 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 22:50:30.358831   18059 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 22:50:30.358892   18059 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 22:50:30.360250   18059 out.go:204]   - Booting up control plane ...
	I0805 22:50:30.360329   18059 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 22:50:30.360391   18059 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 22:50:30.360449   18059 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 22:50:30.360539   18059 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 22:50:30.360625   18059 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 22:50:30.360675   18059 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 22:50:30.360779   18059 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0805 22:50:30.360848   18059 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0805 22:50:30.360902   18059 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.991644ms
	I0805 22:50:30.360963   18059 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0805 22:50:30.361014   18059 kubeadm.go:310] [api-check] The API server is healthy after 5.001813165s
	I0805 22:50:30.361119   18059 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 22:50:30.361241   18059 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 22:50:30.361291   18059 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0805 22:50:30.361476   18059 kubeadm.go:310] [mark-control-plane] Marking the node addons-435364 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 22:50:30.361541   18059 kubeadm.go:310] [bootstrap-token] Using token: 9pphx7.k9i7quxpukmqio93
	I0805 22:50:30.363130   18059 out.go:204]   - Configuring RBAC rules ...
	I0805 22:50:30.363250   18059 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 22:50:30.363323   18059 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 22:50:30.363440   18059 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 22:50:30.363558   18059 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 22:50:30.363704   18059 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 22:50:30.363854   18059 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 22:50:30.363977   18059 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 22:50:30.364016   18059 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0805 22:50:30.364054   18059 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0805 22:50:30.364060   18059 kubeadm.go:310] 
	I0805 22:50:30.364110   18059 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0805 22:50:30.364117   18059 kubeadm.go:310] 
	I0805 22:50:30.364189   18059 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0805 22:50:30.364195   18059 kubeadm.go:310] 
	I0805 22:50:30.364237   18059 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0805 22:50:30.364294   18059 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 22:50:30.364339   18059 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 22:50:30.364343   18059 kubeadm.go:310] 
	I0805 22:50:30.364388   18059 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0805 22:50:30.364394   18059 kubeadm.go:310] 
	I0805 22:50:30.364432   18059 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 22:50:30.364438   18059 kubeadm.go:310] 
	I0805 22:50:30.364518   18059 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0805 22:50:30.364630   18059 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 22:50:30.364738   18059 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 22:50:30.364750   18059 kubeadm.go:310] 
	I0805 22:50:30.364856   18059 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0805 22:50:30.364959   18059 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0805 22:50:30.364967   18059 kubeadm.go:310] 
	I0805 22:50:30.365069   18059 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 9pphx7.k9i7quxpukmqio93 \
	I0805 22:50:30.365191   18059 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80c3f4a7caafd825f47d5f536053424d1d775e8da247cc5594b6b717e711fcd3 \
	I0805 22:50:30.365221   18059 kubeadm.go:310] 	--control-plane 
	I0805 22:50:30.365230   18059 kubeadm.go:310] 
	I0805 22:50:30.365326   18059 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0805 22:50:30.365334   18059 kubeadm.go:310] 
	I0805 22:50:30.365433   18059 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 9pphx7.k9i7quxpukmqio93 \
	I0805 22:50:30.365565   18059 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80c3f4a7caafd825f47d5f536053424d1d775e8da247cc5594b6b717e711fcd3 
	I0805 22:50:30.365578   18059 cni.go:84] Creating CNI manager for ""
	I0805 22:50:30.365584   18059 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 22:50:30.366958   18059 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0805 22:50:30.368185   18059 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0805 22:50:30.379494   18059 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0805 22:50:30.401379   18059 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 22:50:30.401470   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:30.401516   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-435364 minikube.k8s.io/updated_at=2024_08_05T22_50_30_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4 minikube.k8s.io/name=addons-435364 minikube.k8s.io/primary=true
	I0805 22:50:30.436537   18059 ops.go:34] apiserver oom_adj: -16
	I0805 22:50:30.523378   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:31.023814   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:31.524347   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:32.023994   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:32.523422   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:33.024071   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:33.523414   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:34.024392   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:34.524445   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:35.024275   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:35.524431   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:36.023992   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:36.523641   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:37.023827   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:37.523715   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:38.024469   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:38.524038   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:39.024219   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:39.523453   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:40.023679   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:40.523482   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:41.023509   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:41.524305   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:42.023450   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:42.523551   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:43.024018   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:43.524237   18059 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 22:50:43.676340   18059 kubeadm.go:1113] duration metric: took 13.274933703s to wait for elevateKubeSystemPrivileges
	I0805 22:50:43.676377   18059 kubeadm.go:394] duration metric: took 23.339077634s to StartCluster
	I0805 22:50:43.676396   18059 settings.go:142] acquiring lock: {Name:mkd43028f76794f43f4727efb0b77b9a49886053 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:50:43.676538   18059 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19373-9606/kubeconfig
	I0805 22:50:43.676917   18059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/kubeconfig: {Name:mk4481c5dfe578449439dae4abf8681e1b7df535 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:50:43.677148   18059 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0805 22:50:43.677154   18059 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 22:50:43.677203   18059 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0805 22:50:43.677312   18059 addons.go:69] Setting yakd=true in profile "addons-435364"
	I0805 22:50:43.677321   18059 addons.go:69] Setting gcp-auth=true in profile "addons-435364"
	I0805 22:50:43.677319   18059 addons.go:69] Setting inspektor-gadget=true in profile "addons-435364"
	I0805 22:50:43.677342   18059 addons.go:234] Setting addon yakd=true in "addons-435364"
	I0805 22:50:43.677346   18059 mustload.go:65] Loading cluster: addons-435364
	I0805 22:50:43.677356   18059 addons.go:234] Setting addon inspektor-gadget=true in "addons-435364"
	I0805 22:50:43.677355   18059 config.go:182] Loaded profile config "addons-435364": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 22:50:43.677364   18059 addons.go:69] Setting ingress=true in profile "addons-435364"
	I0805 22:50:43.677380   18059 addons.go:69] Setting metrics-server=true in profile "addons-435364"
	I0805 22:50:43.677378   18059 addons.go:69] Setting ingress-dns=true in profile "addons-435364"
	I0805 22:50:43.677387   18059 host.go:66] Checking if "addons-435364" exists ...
	I0805 22:50:43.677396   18059 addons.go:234] Setting addon ingress=true in "addons-435364"
	I0805 22:50:43.677401   18059 addons.go:234] Setting addon metrics-server=true in "addons-435364"
	I0805 22:50:43.677401   18059 addons.go:69] Setting helm-tiller=true in profile "addons-435364"
	I0805 22:50:43.677403   18059 addons.go:234] Setting addon ingress-dns=true in "addons-435364"
	I0805 22:50:43.677418   18059 host.go:66] Checking if "addons-435364" exists ...
	I0805 22:50:43.677422   18059 addons.go:234] Setting addon helm-tiller=true in "addons-435364"
	I0805 22:50:43.677432   18059 host.go:66] Checking if "addons-435364" exists ...
	I0805 22:50:43.677435   18059 host.go:66] Checking if "addons-435364" exists ...
	I0805 22:50:43.677439   18059 host.go:66] Checking if "addons-435364" exists ...
	I0805 22:50:43.677569   18059 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-435364"
	I0805 22:50:43.677680   18059 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-435364"
	I0805 22:50:43.677708   18059 host.go:66] Checking if "addons-435364" exists ...
	I0805 22:50:43.677844   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.677854   18059 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-435364"
	I0805 22:50:43.677860   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.677866   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.677880   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.677880   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.677887   18059 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-435364"
	I0805 22:50:43.677892   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.677907   18059 host.go:66] Checking if "addons-435364" exists ...
	I0805 22:50:43.677374   18059 host.go:66] Checking if "addons-435364" exists ...
	I0805 22:50:43.678034   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.678055   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.678090   18059 config.go:182] Loaded profile config "addons-435364": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 22:50:43.678139   18059 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-435364"
	I0805 22:50:43.678171   18059 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-435364"
	I0805 22:50:43.678251   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.678279   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.678306   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.678323   18059 addons.go:69] Setting default-storageclass=true in profile "addons-435364"
	I0805 22:50:43.678339   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.678351   18059 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-435364"
	I0805 22:50:43.678390   18059 addons.go:69] Setting volcano=true in profile "addons-435364"
	I0805 22:50:43.678396   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.678414   18059 addons.go:234] Setting addon volcano=true in "addons-435364"
	I0805 22:50:43.678423   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.678430   18059 addons.go:69] Setting volumesnapshots=true in profile "addons-435364"
	I0805 22:50:43.678454   18059 addons.go:234] Setting addon volumesnapshots=true in "addons-435364"
	I0805 22:50:43.677845   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.678502   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.678465   18059 addons.go:69] Setting registry=true in profile "addons-435364"
	I0805 22:50:43.678552   18059 addons.go:234] Setting addon registry=true in "addons-435364"
	I0805 22:50:43.678589   18059 host.go:66] Checking if "addons-435364" exists ...
	I0805 22:50:43.678683   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.678706   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.678468   18059 addons.go:69] Setting storage-provisioner=true in profile "addons-435364"
	I0805 22:50:43.678113   18059 addons.go:69] Setting cloud-spanner=true in profile "addons-435364"
	I0805 22:50:43.678746   18059 addons.go:234] Setting addon storage-provisioner=true in "addons-435364"
	I0805 22:50:43.678686   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.678765   18059 addons.go:234] Setting addon cloud-spanner=true in "addons-435364"
	I0805 22:50:43.678771   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.678792   18059 host.go:66] Checking if "addons-435364" exists ...
	I0805 22:50:43.678797   18059 host.go:66] Checking if "addons-435364" exists ...
	I0805 22:50:43.678798   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.678933   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.678958   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.679087   18059 host.go:66] Checking if "addons-435364" exists ...
	I0805 22:50:43.679119   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.679139   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.679251   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.680473   18059 host.go:66] Checking if "addons-435364" exists ...
	I0805 22:50:43.680979   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.681049   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.683769   18059 out.go:177] * Verifying Kubernetes components...
	I0805 22:50:43.685529   18059 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 22:50:43.699503   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42103
	I0805 22:50:43.699912   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38453
	I0805 22:50:43.699936   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34427
	I0805 22:50:43.700080   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33697
	I0805 22:50:43.700080   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.700317   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.700390   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.700856   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.700875   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.700918   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.701029   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.701042   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.701180   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.701203   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.701382   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.701464   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.701485   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.701553   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.701858   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.701875   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.701913   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.701926   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.702223   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.702250   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.702309   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40125
	I0805 22:50:43.702435   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.702470   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.702770   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.703296   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.703327   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.703461   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.703498   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.703610   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.703629   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.704311   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.704335   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.707863   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.708526   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.708566   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.728256   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38819
	I0805 22:50:43.729118   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.729704   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.729728   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.730068   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.730669   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.730708   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.737318   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38277
	I0805 22:50:43.737831   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.738548   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.738565   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.738949   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.739597   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.739635   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.739825   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40117
	I0805 22:50:43.741556   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39621
	I0805 22:50:43.741990   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.742082   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.742613   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.742628   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.742744   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.742755   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.743119   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.743700   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.743734   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.743934   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40507
	I0805 22:50:43.744384   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.744899   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.744914   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.744974   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40911
	I0805 22:50:43.745251   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.745380   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.745518   18059 main.go:141] libmachine: (addons-435364) Calling .GetState
	I0805 22:50:43.745628   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.746213   18059 main.go:141] libmachine: (addons-435364) Calling .GetState
	I0805 22:50:43.746278   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.746296   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.746916   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.747960   18059 main.go:141] libmachine: (addons-435364) Calling .GetState
	I0805 22:50:43.749964   18059 addons.go:234] Setting addon default-storageclass=true in "addons-435364"
	I0805 22:50:43.750004   18059 host.go:66] Checking if "addons-435364" exists ...
	I0805 22:50:43.750393   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.750433   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.750641   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:43.750658   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:43.752882   18059 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0805 22:50:43.752882   18059 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0805 22:50:43.753393   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37129
	I0805 22:50:43.753425   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36175
	I0805 22:50:43.753848   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.754245   18059 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0805 22:50:43.754262   18059 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0805 22:50:43.754280   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:43.754365   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.754380   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.754673   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.754823   18059 main.go:141] libmachine: (addons-435364) Calling .GetState
	I0805 22:50:43.755725   18059 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0805 22:50:43.756766   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:43.757915   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.758280   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:43.758309   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.758400   18059 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0805 22:50:43.758423   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:43.758460   18059 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0805 22:50:43.758499   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.758631   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:43.758836   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:43.758988   18059 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa Username:docker}
	I0805 22:50:43.759381   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.759405   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.760393   18059 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0805 22:50:43.760410   18059 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0805 22:50:43.760428   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:43.761061   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40707
	I0805 22:50:43.761385   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.761519   18059 main.go:141] libmachine: (addons-435364) Calling .GetState
	I0805 22:50:43.761809   18059 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0805 22:50:43.763417   18059 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0805 22:50:43.763418   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.764007   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:43.764031   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.764168   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:43.764369   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:43.764486   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:43.764595   18059 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa Username:docker}
	I0805 22:50:43.764890   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36667
	I0805 22:50:43.765110   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:43.765187   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.765608   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.765626   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.765484   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.765926   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.766063   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.766080   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.766110   18059 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0805 22:50:43.766606   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.766995   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.766634   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.767290   18059 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0805 22:50:43.767801   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40035
	I0805 22:50:43.767645   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.767851   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.768120   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.768556   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.768735   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.769120   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.769349   18059 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0805 22:50:43.769366   18059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0805 22:50:43.769383   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:43.769661   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.769696   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.770097   18059 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0805 22:50:43.771676   18059 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0805 22:50:43.772932   18059 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0805 22:50:43.772949   18059 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0805 22:50:43.772977   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:43.772998   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.773558   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:43.773580   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.773749   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:43.774199   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:43.774421   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:43.774668   18059 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa Username:docker}
	I0805 22:50:43.776037   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.776424   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:43.776449   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.776684   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:43.776889   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:43.777077   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:43.777276   18059 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa Username:docker}
	I0805 22:50:43.780267   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39577
	I0805 22:50:43.780742   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.781148   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45257
	I0805 22:50:43.781255   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.781270   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.781455   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.781544   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.781985   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.782011   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.782186   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45863
	I0805 22:50:43.782377   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.782389   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.782661   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.782844   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.783217   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.783232   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.785914   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33073
	I0805 22:50:43.786450   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.786482   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.786769   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.787132   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.787466   18059 main.go:141] libmachine: (addons-435364) Calling .GetState
	I0805 22:50:43.787597   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.787619   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.787922   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.788104   18059 main.go:141] libmachine: (addons-435364) Calling .GetState
	I0805 22:50:43.790373   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:43.790979   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:43.792963   18059 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0805 22:50:43.793021   18059 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0805 22:50:43.793244   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34269
	I0805 22:50:43.793375   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36011
	I0805 22:50:43.793767   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.793809   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.793942   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46027
	I0805 22:50:43.794257   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.794274   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.794336   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.794750   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.794766   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.794969   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.795101   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.795237   18059 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0805 22:50:43.795252   18059 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0805 22:50:43.795265   18059 main.go:141] libmachine: (addons-435364) Calling .GetState
	I0805 22:50:43.795269   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:43.795321   18059 main.go:141] libmachine: (addons-435364) Calling .GetState
	I0805 22:50:43.795419   18059 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0805 22:50:43.795435   18059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0805 22:50:43.795451   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:43.795958   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.795978   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.797016   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.797635   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.797674   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.797884   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42629
	I0805 22:50:43.797895   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:43.798406   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.799592   18059 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-435364"
	I0805 22:50:43.799635   18059 host.go:66] Checking if "addons-435364" exists ...
	I0805 22:50:43.799797   18059 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0805 22:50:43.800016   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.800046   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.800233   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.800344   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.800355   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.800753   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.800815   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.800887   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:43.800904   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.801073   18059 main.go:141] libmachine: (addons-435364) Calling .GetState
	I0805 22:50:43.801116   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:43.801323   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:43.801341   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.801372   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:43.801504   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:43.801550   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:43.801705   18059 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa Username:docker}
	I0805 22:50:43.801906   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:43.802057   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:43.802238   18059 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa Username:docker}
	I0805 22:50:43.802565   18059 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0805 22:50:43.803280   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:43.806469   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44719
	I0805 22:50:43.806938   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.807487   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.807502   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.807552   18059 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 22:50:43.807647   18059 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0805 22:50:43.807910   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.808169   18059 main.go:141] libmachine: (addons-435364) Calling .GetState
	I0805 22:50:43.809922   18059 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0805 22:50:43.809941   18059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0805 22:50:43.809960   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:43.810646   18059 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 22:50:43.810659   18059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 22:50:43.810676   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:43.811182   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:43.813078   18059 out.go:177]   - Using image docker.io/registry:2.8.3
	I0805 22:50:43.814389   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.814414   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.814845   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:43.814865   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.814896   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:43.814910   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.815393   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:43.815463   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:43.815644   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:43.815699   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:43.815738   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:43.815818   18059 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa Username:docker}
	I0805 22:50:43.816123   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:43.816245   18059 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa Username:docker}
	I0805 22:50:43.816685   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45191
	I0805 22:50:43.817236   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.817683   18059 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0805 22:50:43.817860   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.817875   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.818299   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.818442   18059 main.go:141] libmachine: (addons-435364) Calling .GetState
	I0805 22:50:43.818839   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38631
	I0805 22:50:43.819119   18059 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0805 22:50:43.819139   18059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0805 22:50:43.819156   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:43.819332   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.819932   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.819948   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.820304   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.820500   18059 main.go:141] libmachine: (addons-435364) Calling .GetState
	I0805 22:50:43.821471   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42483
	I0805 22:50:43.821859   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.822579   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:43.822631   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.823451   18059 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 22:50:43.823465   18059 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 22:50:43.823482   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:43.823619   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44527
	I0805 22:50:43.823919   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41149
	I0805 22:50:43.824082   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.824300   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.824315   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.824507   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:43.824526   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.824560   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:43.824946   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:43.825036   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.824957   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.824987   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35483
	I0805 22:50:43.825257   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:43.825365   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:43.825474   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.825482   18059 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa Username:docker}
	I0805 22:50:43.825493   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.825612   18059 main.go:141] libmachine: (addons-435364) Calling .GetState
	I0805 22:50:43.825782   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.825938   18059 main.go:141] libmachine: (addons-435364) Calling .GetState
	I0805 22:50:43.826159   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.826177   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.826252   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.826722   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.826911   18059 main.go:141] libmachine: (addons-435364) Calling .GetState
	I0805 22:50:43.827152   18059 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0805 22:50:43.827533   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:43.827621   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.827635   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.828069   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.828111   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:43.828235   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.828440   18059 main.go:141] libmachine: (addons-435364) Calling .GetState
	I0805 22:50:43.828698   18059 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0805 22:50:43.828713   18059 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0805 22:50:43.828730   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:43.828823   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:43.828844   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.828868   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45717
	I0805 22:50:43.828956   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:43.829343   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:43.829358   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:43.829435   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:43.829607   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:43.829628   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:43.829704   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:43.829716   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:43.829729   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:43.829736   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:43.829791   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:43.829934   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:43.829960   18059 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa Username:docker}
	I0805 22:50:43.830197   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:43.830208   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	W0805 22:50:43.830281   18059 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0805 22:50:43.830382   18059 host.go:66] Checking if "addons-435364" exists ...
	I0805 22:50:43.830628   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.830640   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.830764   18059 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	I0805 22:50:43.831096   18059 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0805 22:50:43.831741   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.832145   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:43.832165   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.832312   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:43.832451   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:43.832557   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:43.832692   18059 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa Username:docker}
	I0805 22:50:43.832792   18059 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0805 22:50:43.832801   18059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0805 22:50:43.832815   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:43.832886   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.833013   18059 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0805 22:50:43.833024   18059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0805 22:50:43.833038   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:43.833450   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.833460   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.833989   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.834541   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:43.834558   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:43.835831   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.836041   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.836173   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:43.836196   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.836309   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:43.836468   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:43.836517   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:43.836527   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.836599   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:43.836740   18059 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa Username:docker}
	I0805 22:50:43.836758   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:43.836877   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:43.836983   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:43.837089   18059 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa Username:docker}
	W0805 22:50:43.860255   18059 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:39886->192.168.39.129:22: read: connection reset by peer
	I0805 22:50:43.860282   18059 retry.go:31] will retry after 258.36716ms: ssh: handshake failed: read tcp 192.168.39.1:39886->192.168.39.129:22: read: connection reset by peer
	I0805 22:50:43.874890   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39235
	I0805 22:50:43.874892   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45691
	I0805 22:50:43.875392   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.875474   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:43.875934   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.875954   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.876047   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:43.876065   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:43.876355   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.876394   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:43.876512   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:43.876564   18059 main.go:141] libmachine: (addons-435364) Calling .GetState
	I0805 22:50:43.878236   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:43.880181   18059 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0805 22:50:43.881864   18059 out.go:177]   - Using image docker.io/busybox:stable
	I0805 22:50:43.883245   18059 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0805 22:50:43.883262   18059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0805 22:50:43.883279   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:43.885681   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.886032   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:43.886053   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:43.886214   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:43.886413   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:43.886693   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:43.886839   18059 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa Username:docker}
	I0805 22:50:44.179211   18059 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 22:50:44.179287   18059 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0805 22:50:44.318662   18059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0805 22:50:44.351872   18059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 22:50:44.373044   18059 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0805 22:50:44.373068   18059 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0805 22:50:44.375292   18059 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0805 22:50:44.375306   18059 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0805 22:50:44.377480   18059 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0805 22:50:44.377499   18059 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0805 22:50:44.406029   18059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0805 22:50:44.417402   18059 node_ready.go:35] waiting up to 6m0s for node "addons-435364" to be "Ready" ...
	I0805 22:50:44.420727   18059 node_ready.go:49] node "addons-435364" has status "Ready":"True"
	I0805 22:50:44.420767   18059 node_ready.go:38] duration metric: took 3.317462ms for node "addons-435364" to be "Ready" ...
	I0805 22:50:44.420779   18059 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 22:50:44.433133   18059 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ng8rk" in "kube-system" namespace to be "Ready" ...
	I0805 22:50:44.465287   18059 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0805 22:50:44.465312   18059 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0805 22:50:44.466385   18059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 22:50:44.482026   18059 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0805 22:50:44.482048   18059 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0805 22:50:44.512219   18059 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0805 22:50:44.512240   18059 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0805 22:50:44.523110   18059 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0805 22:50:44.523131   18059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0805 22:50:44.542913   18059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0805 22:50:44.555948   18059 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0805 22:50:44.555973   18059 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0805 22:50:44.561851   18059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0805 22:50:44.625752   18059 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0805 22:50:44.625779   18059 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0805 22:50:44.628213   18059 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0805 22:50:44.628240   18059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0805 22:50:44.653550   18059 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0805 22:50:44.653576   18059 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0805 22:50:44.660568   18059 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0805 22:50:44.660594   18059 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0805 22:50:44.690970   18059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0805 22:50:44.709226   18059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0805 22:50:44.729037   18059 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0805 22:50:44.729067   18059 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0805 22:50:44.752497   18059 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0805 22:50:44.752517   18059 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0805 22:50:44.817527   18059 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0805 22:50:44.817548   18059 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0805 22:50:44.854541   18059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0805 22:50:44.869810   18059 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0805 22:50:44.869833   18059 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0805 22:50:44.872801   18059 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0805 22:50:44.872815   18059 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0805 22:50:44.917015   18059 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 22:50:44.917035   18059 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0805 22:50:44.962004   18059 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0805 22:50:44.962026   18059 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0805 22:50:45.080496   18059 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0805 22:50:45.080523   18059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0805 22:50:45.119095   18059 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0805 22:50:45.119134   18059 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0805 22:50:45.130968   18059 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0805 22:50:45.130988   18059 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0805 22:50:45.160034   18059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0805 22:50:45.223173   18059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0805 22:50:45.249810   18059 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0805 22:50:45.249831   18059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0805 22:50:45.282668   18059 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0805 22:50:45.282700   18059 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0805 22:50:45.302054   18059 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0805 22:50:45.302080   18059 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0805 22:50:45.484584   18059 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0805 22:50:45.484604   18059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0805 22:50:45.537109   18059 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0805 22:50:45.537147   18059 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0805 22:50:45.608780   18059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0805 22:50:45.635156   18059 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0805 22:50:45.635187   18059 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0805 22:50:45.636895   18059 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0805 22:50:45.636949   18059 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0805 22:50:45.841167   18059 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0805 22:50:45.841193   18059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0805 22:50:45.847559   18059 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0805 22:50:45.847620   18059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0805 22:50:46.092936   18059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0805 22:50:46.199292   18059 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0805 22:50:46.199312   18059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0805 22:50:46.384460   18059 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.205137516s)
	I0805 22:50:46.384491   18059 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0805 22:50:46.467646   18059 pod_ready.go:102] pod "coredns-7db6d8ff4d-ng8rk" in "kube-system" namespace has status "Ready":"False"
	I0805 22:50:46.511101   18059 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0805 22:50:46.511128   18059 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0805 22:50:46.916480   18059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0805 22:50:46.940397   18059 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-435364" context rescaled to 1 replicas
	I0805 22:50:48.540472   18059 pod_ready.go:102] pod "coredns-7db6d8ff4d-ng8rk" in "kube-system" namespace has status "Ready":"False"
	I0805 22:50:49.084244   18059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.76554551s)
	I0805 22:50:49.084293   18059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.732388526s)
	I0805 22:50:49.084304   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:49.084320   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:49.084329   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:49.084344   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:49.084415   18059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.67834613s)
	I0805 22:50:49.084449   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:49.084461   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:49.084484   18059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.618070914s)
	I0805 22:50:49.084514   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:49.084528   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:49.084845   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:49.084870   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:49.084880   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:49.084894   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:49.084896   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:49.084902   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:49.084905   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:49.084904   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:49.084911   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:49.084882   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:49.084926   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:49.084929   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:49.084932   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:49.084937   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:49.084945   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:49.084912   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:49.084951   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:49.084845   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:49.085155   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:49.085203   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:49.085218   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:49.085232   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:49.085240   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:49.085241   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:49.085248   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:49.085420   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:49.085440   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:49.086546   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:49.086561   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:49.086581   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:49.148718   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:49.148746   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:49.149042   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:49.149085   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	W0805 22:50:49.149192   18059 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0805 22:50:49.164023   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:49.164043   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:49.164351   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:49.164369   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:49.164373   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:49.440491   18059 pod_ready.go:92] pod "coredns-7db6d8ff4d-ng8rk" in "kube-system" namespace has status "Ready":"True"
	I0805 22:50:49.440513   18059 pod_ready.go:81] duration metric: took 5.007354905s for pod "coredns-7db6d8ff4d-ng8rk" in "kube-system" namespace to be "Ready" ...
	I0805 22:50:49.440522   18059 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-qc4fj" in "kube-system" namespace to be "Ready" ...
	I0805 22:50:49.445585   18059 pod_ready.go:92] pod "coredns-7db6d8ff4d-qc4fj" in "kube-system" namespace has status "Ready":"True"
	I0805 22:50:49.445604   18059 pod_ready.go:81] duration metric: took 5.075791ms for pod "coredns-7db6d8ff4d-qc4fj" in "kube-system" namespace to be "Ready" ...
	I0805 22:50:49.445613   18059 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-435364" in "kube-system" namespace to be "Ready" ...
	I0805 22:50:49.450306   18059 pod_ready.go:92] pod "etcd-addons-435364" in "kube-system" namespace has status "Ready":"True"
	I0805 22:50:49.450331   18059 pod_ready.go:81] duration metric: took 4.710521ms for pod "etcd-addons-435364" in "kube-system" namespace to be "Ready" ...
	I0805 22:50:49.450347   18059 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-435364" in "kube-system" namespace to be "Ready" ...
	I0805 22:50:49.455933   18059 pod_ready.go:92] pod "kube-apiserver-addons-435364" in "kube-system" namespace has status "Ready":"True"
	I0805 22:50:49.455962   18059 pod_ready.go:81] duration metric: took 5.604264ms for pod "kube-apiserver-addons-435364" in "kube-system" namespace to be "Ready" ...
	I0805 22:50:49.455974   18059 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-435364" in "kube-system" namespace to be "Ready" ...
	I0805 22:50:49.460394   18059 pod_ready.go:92] pod "kube-controller-manager-addons-435364" in "kube-system" namespace has status "Ready":"True"
	I0805 22:50:49.460419   18059 pod_ready.go:81] duration metric: took 4.436596ms for pod "kube-controller-manager-addons-435364" in "kube-system" namespace to be "Ready" ...
	I0805 22:50:49.460431   18059 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lt8r2" in "kube-system" namespace to be "Ready" ...
	I0805 22:50:49.846279   18059 pod_ready.go:92] pod "kube-proxy-lt8r2" in "kube-system" namespace has status "Ready":"True"
	I0805 22:50:49.846311   18059 pod_ready.go:81] duration metric: took 385.870407ms for pod "kube-proxy-lt8r2" in "kube-system" namespace to be "Ready" ...
	I0805 22:50:49.846324   18059 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-435364" in "kube-system" namespace to be "Ready" ...
	I0805 22:50:50.237689   18059 pod_ready.go:92] pod "kube-scheduler-addons-435364" in "kube-system" namespace has status "Ready":"True"
	I0805 22:50:50.237717   18059 pod_ready.go:81] duration metric: took 391.384837ms for pod "kube-scheduler-addons-435364" in "kube-system" namespace to be "Ready" ...
	I0805 22:50:50.237728   18059 pod_ready.go:38] duration metric: took 5.816931704s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 22:50:50.237746   18059 api_server.go:52] waiting for apiserver process to appear ...
	I0805 22:50:50.237808   18059 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 22:50:50.887724   18059 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0805 22:50:50.887759   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:50.890430   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:50.890825   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:50.890854   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:50.891067   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:50.891249   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:50.891408   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:50.891567   18059 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa Username:docker}
	I0805 22:50:51.342921   18059 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0805 22:50:51.486984   18059 addons.go:234] Setting addon gcp-auth=true in "addons-435364"
	I0805 22:50:51.487037   18059 host.go:66] Checking if "addons-435364" exists ...
	I0805 22:50:51.487344   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:51.487376   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:51.502034   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38685
	I0805 22:50:51.502502   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:51.503009   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:51.503029   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:51.503428   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:51.503865   18059 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 22:50:51.503891   18059 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 22:50:51.518846   18059 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43837
	I0805 22:50:51.519325   18059 main.go:141] libmachine: () Calling .GetVersion
	I0805 22:50:51.519856   18059 main.go:141] libmachine: Using API Version  1
	I0805 22:50:51.519903   18059 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 22:50:51.520237   18059 main.go:141] libmachine: () Calling .GetMachineName
	I0805 22:50:51.520405   18059 main.go:141] libmachine: (addons-435364) Calling .GetState
	I0805 22:50:51.521878   18059 main.go:141] libmachine: (addons-435364) Calling .DriverName
	I0805 22:50:51.522124   18059 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0805 22:50:51.522143   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHHostname
	I0805 22:50:51.524687   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:51.525134   18059 main.go:141] libmachine: (addons-435364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:99:11:e1", ip: ""} in network mk-addons-435364: {Iface:virbr1 ExpiryTime:2024-08-05 23:50:03 +0000 UTC Type:0 Mac:52:54:00:99:11:e1 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:addons-435364 Clientid:01:52:54:00:99:11:e1}
	I0805 22:50:51.525159   18059 main.go:141] libmachine: (addons-435364) DBG | domain addons-435364 has defined IP address 192.168.39.129 and MAC address 52:54:00:99:11:e1 in network mk-addons-435364
	I0805 22:50:51.525164   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHPort
	I0805 22:50:51.525306   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHKeyPath
	I0805 22:50:51.525448   18059 main.go:141] libmachine: (addons-435364) Calling .GetSSHUsername
	I0805 22:50:51.525559   18059 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/addons-435364/id_rsa Username:docker}
	I0805 22:50:52.883768   18059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.340815998s)
	I0805 22:50:52.883819   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:52.883833   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:52.883860   18059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.32198163s)
	I0805 22:50:52.883901   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:52.883916   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:52.883956   18059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.174704669s)
	I0805 22:50:52.883917   18059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.19291952s)
	I0805 22:50:52.883988   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:52.884000   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:52.884005   18059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.029430947s)
	I0805 22:50:52.883989   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:52.884039   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:52.884055   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:52.884056   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:52.884053   18059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.723989148s)
	I0805 22:50:52.884097   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:52.884104   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:52.884132   18059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.660936427s)
	I0805 22:50:52.884151   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:52.884181   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:52.884201   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:52.884237   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:52.884244   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:52.884252   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:52.884259   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:52.884310   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:52.884317   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:52.884324   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:52.884324   18059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.275508668s)
	I0805 22:50:52.884332   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	W0805 22:50:52.884351   18059 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0805 22:50:52.884383   18059 retry.go:31] will retry after 291.464679ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0805 22:50:52.884441   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:52.884451   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:52.884459   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:52.884466   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:52.884493   18059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.791499664s)
	I0805 22:50:52.884511   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:52.884522   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:52.884529   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:52.884549   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:52.884556   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:52.884564   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:52.884570   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:52.884595   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:52.884603   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:52.884612   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:52.884619   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:52.884680   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:52.884700   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:52.884708   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:52.884714   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:52.884721   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:52.886002   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:52.886034   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:52.886042   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:52.886058   18059 addons.go:475] Verifying addon ingress=true in "addons-435364"
	I0805 22:50:52.886293   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:52.886326   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:52.886333   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:52.886732   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:52.886762   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:52.886783   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:52.886796   18059 addons.go:475] Verifying addon metrics-server=true in "addons-435364"
	I0805 22:50:52.886797   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:52.886815   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:52.886826   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:52.886830   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:52.886836   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:52.886852   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:52.886867   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:52.887444   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:52.887477   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:52.887485   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:52.887777   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:52.887812   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:52.887823   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:52.887872   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:52.887891   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:52.886765   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:52.886817   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:52.887949   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:52.888147   18059 out.go:177] * Verifying ingress addon...
	I0805 22:50:52.888699   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:52.888716   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:52.888724   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:52.888731   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:52.889189   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:52.889223   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:52.889230   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:52.889239   18059 addons.go:475] Verifying addon registry=true in "addons-435364"
	I0805 22:50:52.889458   18059 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-435364 service yakd-dashboard -n yakd-dashboard
	
	I0805 22:50:52.890368   18059 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0805 22:50:52.890673   18059 out.go:177] * Verifying registry addon...
	I0805 22:50:52.892712   18059 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0805 22:50:52.911983   18059 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0805 22:50:52.912007   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:52.916782   18059 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0805 22:50:52.916800   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:53.176847   18059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0805 22:50:53.403966   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:53.406283   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:53.873455   18059 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.635622169s)
	I0805 22:50:53.873470   18059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.956933725s)
	I0805 22:50:53.873490   18059 api_server.go:72] duration metric: took 10.196307174s to wait for apiserver process to appear ...
	I0805 22:50:53.873497   18059 api_server.go:88] waiting for apiserver healthz status ...
	I0805 22:50:53.873518   18059 api_server.go:253] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0805 22:50:53.873517   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:53.873523   18059 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.35138169s)
	I0805 22:50:53.873530   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:53.873878   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:53.873893   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:53.873908   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:53.873916   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:53.874197   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:53.874218   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:53.874202   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:53.874229   18059 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-435364"
	I0805 22:50:53.875063   18059 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0805 22:50:53.875924   18059 out.go:177] * Verifying csi-hostpath-driver addon...
	I0805 22:50:53.877314   18059 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0805 22:50:53.878276   18059 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0805 22:50:53.878468   18059 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0805 22:50:53.878479   18059 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0805 22:50:53.921505   18059 api_server.go:279] https://192.168.39.129:8443/healthz returned 200:
	ok
	I0805 22:50:53.924856   18059 api_server.go:141] control plane version: v1.30.3
	I0805 22:50:53.924878   18059 api_server.go:131] duration metric: took 51.374856ms to wait for apiserver health ...
	I0805 22:50:53.924886   18059 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 22:50:53.931844   18059 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0805 22:50:53.931865   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:53.937825   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:53.949822   18059 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0805 22:50:53.949841   18059 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0805 22:50:53.955641   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:53.971438   18059 system_pods.go:59] 19 kube-system pods found
	I0805 22:50:53.971462   18059 system_pods.go:61] "coredns-7db6d8ff4d-ng8rk" [2091f1e9-b1aa-45fd-8197-0f661fcf784e] Running
	I0805 22:50:53.971466   18059 system_pods.go:61] "coredns-7db6d8ff4d-qc4fj" [2374285d-3c1f-4403-a6a7-c6bfd6ea2be9] Running
	I0805 22:50:53.971472   18059 system_pods.go:61] "csi-hostpath-attacher-0" [c3d74a8e-fdb7-463c-8ed0-89f152a701f1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0805 22:50:53.971476   18059 system_pods.go:61] "csi-hostpath-resizer-0" [2977ed62-99ff-4c08-8e71-b4f0c9bf67d3] Pending
	I0805 22:50:53.971484   18059 system_pods.go:61] "csi-hostpathplugin-sb9bm" [ca40b966-32e8-4c43-8ce2-7574141f44b2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0805 22:50:53.971491   18059 system_pods.go:61] "etcd-addons-435364" [575da880-ae3c-4192-95aa-5c659f5ccb5d] Running
	I0805 22:50:53.971495   18059 system_pods.go:61] "kube-apiserver-addons-435364" [45f478e1-eebb-4cde-bde2-f4d32decde9e] Running
	I0805 22:50:53.971498   18059 system_pods.go:61] "kube-controller-manager-addons-435364" [a9924751-aef6-4ba5-b29b-63491edecb83] Running
	I0805 22:50:53.971503   18059 system_pods.go:61] "kube-ingress-dns-minikube" [a3229854-d9da-4ed8-ad6f-5a4b35dd430f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0805 22:50:53.971508   18059 system_pods.go:61] "kube-proxy-lt8r2" [c1a7c99c-379f-4e2d-b241-4de97adffa76] Running
	I0805 22:50:53.971511   18059 system_pods.go:61] "kube-scheduler-addons-435364" [127dd332-e714-4512-9460-acc0e7b194ff] Running
	I0805 22:50:53.971515   18059 system_pods.go:61] "metrics-server-c59844bb4-m9t52" [f825462d-de15-4aa7-9436-76eda3bbd66f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 22:50:53.971523   18059 system_pods.go:61] "nvidia-device-plugin-daemonset-jk9q5" [1a23f5f9-2fc4-453c-9381-177bf606032d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0805 22:50:53.971528   18059 system_pods.go:61] "registry-698f998955-4stmn" [c0716044-6d96-44a5-ab8d-03023e2da298] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0805 22:50:53.971535   18059 system_pods.go:61] "registry-proxy-2dplh" [a8ad0955-3945-41ac-a7b2-78bf1d724a1a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0805 22:50:53.971540   18059 system_pods.go:61] "snapshot-controller-745499f584-7jwrf" [19b31468-b55d-4eb4-a008-7b9b9af0e582] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0805 22:50:53.971547   18059 system_pods.go:61] "snapshot-controller-745499f584-lphmq" [24eb6083-c3a3-4873-8a71-0e4c16b350ff] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0805 22:50:53.971551   18059 system_pods.go:61] "storage-provisioner" [cfbc5ee9-491f-4c8d-aecc-72ba061092ec] Running
	I0805 22:50:53.971557   18059 system_pods.go:61] "tiller-deploy-6677d64bcd-qn6ln" [4188df06-7e5f-4218-bf0f-658f8c51bfb9] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0805 22:50:53.971564   18059 system_pods.go:74] duration metric: took 46.673256ms to wait for pod list to return data ...
	I0805 22:50:53.971573   18059 default_sa.go:34] waiting for default service account to be created ...
	I0805 22:50:53.977852   18059 default_sa.go:45] found service account: "default"
	I0805 22:50:53.977871   18059 default_sa.go:55] duration metric: took 6.291945ms for default service account to be created ...
	I0805 22:50:53.977878   18059 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 22:50:54.005241   18059 system_pods.go:86] 19 kube-system pods found
	I0805 22:50:54.005271   18059 system_pods.go:89] "coredns-7db6d8ff4d-ng8rk" [2091f1e9-b1aa-45fd-8197-0f661fcf784e] Running
	I0805 22:50:54.005278   18059 system_pods.go:89] "coredns-7db6d8ff4d-qc4fj" [2374285d-3c1f-4403-a6a7-c6bfd6ea2be9] Running
	I0805 22:50:54.005287   18059 system_pods.go:89] "csi-hostpath-attacher-0" [c3d74a8e-fdb7-463c-8ed0-89f152a701f1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0805 22:50:54.005296   18059 system_pods.go:89] "csi-hostpath-resizer-0" [2977ed62-99ff-4c08-8e71-b4f0c9bf67d3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0805 22:50:54.005311   18059 system_pods.go:89] "csi-hostpathplugin-sb9bm" [ca40b966-32e8-4c43-8ce2-7574141f44b2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0805 22:50:54.005322   18059 system_pods.go:89] "etcd-addons-435364" [575da880-ae3c-4192-95aa-5c659f5ccb5d] Running
	I0805 22:50:54.005333   18059 system_pods.go:89] "kube-apiserver-addons-435364" [45f478e1-eebb-4cde-bde2-f4d32decde9e] Running
	I0805 22:50:54.005341   18059 system_pods.go:89] "kube-controller-manager-addons-435364" [a9924751-aef6-4ba5-b29b-63491edecb83] Running
	I0805 22:50:54.005354   18059 system_pods.go:89] "kube-ingress-dns-minikube" [a3229854-d9da-4ed8-ad6f-5a4b35dd430f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0805 22:50:54.005360   18059 system_pods.go:89] "kube-proxy-lt8r2" [c1a7c99c-379f-4e2d-b241-4de97adffa76] Running
	I0805 22:50:54.005366   18059 system_pods.go:89] "kube-scheduler-addons-435364" [127dd332-e714-4512-9460-acc0e7b194ff] Running
	I0805 22:50:54.005375   18059 system_pods.go:89] "metrics-server-c59844bb4-m9t52" [f825462d-de15-4aa7-9436-76eda3bbd66f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0805 22:50:54.005389   18059 system_pods.go:89] "nvidia-device-plugin-daemonset-jk9q5" [1a23f5f9-2fc4-453c-9381-177bf606032d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0805 22:50:54.005402   18059 system_pods.go:89] "registry-698f998955-4stmn" [c0716044-6d96-44a5-ab8d-03023e2da298] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0805 22:50:54.005415   18059 system_pods.go:89] "registry-proxy-2dplh" [a8ad0955-3945-41ac-a7b2-78bf1d724a1a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0805 22:50:54.005427   18059 system_pods.go:89] "snapshot-controller-745499f584-7jwrf" [19b31468-b55d-4eb4-a008-7b9b9af0e582] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0805 22:50:54.005442   18059 system_pods.go:89] "snapshot-controller-745499f584-lphmq" [24eb6083-c3a3-4873-8a71-0e4c16b350ff] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0805 22:50:54.005452   18059 system_pods.go:89] "storage-provisioner" [cfbc5ee9-491f-4c8d-aecc-72ba061092ec] Running
	I0805 22:50:54.005463   18059 system_pods.go:89] "tiller-deploy-6677d64bcd-qn6ln" [4188df06-7e5f-4218-bf0f-658f8c51bfb9] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0805 22:50:54.005475   18059 system_pods.go:126] duration metric: took 27.590484ms to wait for k8s-apps to be running ...
	I0805 22:50:54.005490   18059 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 22:50:54.005538   18059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 22:50:54.017525   18059 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0805 22:50:54.017547   18059 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0805 22:50:54.080589   18059 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0805 22:50:54.392993   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:54.394879   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:54.411308   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:54.886746   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:54.894681   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:54.898162   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:55.384553   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:55.395320   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:55.398149   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:55.450511   18059 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.444950765s)
	I0805 22:50:55.450546   18059 system_svc.go:56] duration metric: took 1.445055617s WaitForService to wait for kubelet
	I0805 22:50:55.450558   18059 kubeadm.go:582] duration metric: took 11.773374958s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 22:50:55.450577   18059 node_conditions.go:102] verifying NodePressure condition ...
	I0805 22:50:55.450509   18059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.273605615s)
	I0805 22:50:55.450656   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:55.450670   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:55.450931   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:55.450944   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:55.450952   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:55.450959   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:55.451218   18059 main.go:141] libmachine: (addons-435364) DBG | Closing plugin on server side
	I0805 22:50:55.451278   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:55.451293   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:55.453813   18059 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 22:50:55.453833   18059 node_conditions.go:123] node cpu capacity is 2
	I0805 22:50:55.453844   18059 node_conditions.go:105] duration metric: took 3.261865ms to run NodePressure ...
	I0805 22:50:55.453855   18059 start.go:241] waiting for startup goroutines ...
	I0805 22:50:55.794206   18059 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.713578108s)
	I0805 22:50:55.794258   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:55.794270   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:55.794523   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:55.794571   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:55.794592   18059 main.go:141] libmachine: Making call to close driver server
	I0805 22:50:55.794614   18059 main.go:141] libmachine: (addons-435364) Calling .Close
	I0805 22:50:55.794845   18059 main.go:141] libmachine: Successfully made call to close driver server
	I0805 22:50:55.794864   18059 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 22:50:55.796735   18059 addons.go:475] Verifying addon gcp-auth=true in "addons-435364"
	I0805 22:50:55.799779   18059 out.go:177] * Verifying gcp-auth addon...
	I0805 22:50:55.801747   18059 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0805 22:50:55.834203   18059 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0805 22:50:55.834227   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:55.903132   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:55.932737   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:55.942611   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:56.306547   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:56.390604   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:56.401418   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:56.407342   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:56.805992   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:56.886789   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:56.895416   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:56.897303   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:57.305792   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:57.385742   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:57.394163   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:57.396395   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:57.805904   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:57.884355   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:57.895240   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:57.897766   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:58.306415   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:58.384804   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:58.396291   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:58.398580   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:58.805970   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:58.884085   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:58.894366   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:58.897088   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:59.305934   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:59.383822   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:59.394349   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:59.397241   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:50:59.806057   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:50:59.885080   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:50:59.896240   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:50:59.899452   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:00.306082   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:00.383603   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:00.394623   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:00.397443   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:00.806127   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:00.885914   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:00.895131   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:00.897258   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:01.306509   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:01.388270   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:01.394363   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:01.397262   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:01.808485   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:01.884527   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:01.902309   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:01.902588   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:02.306605   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:02.383639   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:02.394871   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:02.396821   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:02.806240   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:02.884561   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:02.894971   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:02.898247   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:03.306214   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:03.384310   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:03.394216   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:03.397170   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:03.805860   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:03.884825   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:03.894312   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:03.896729   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:04.305494   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:04.384412   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:04.394998   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:04.397049   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:04.806006   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:04.884064   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:04.896044   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:04.897845   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:05.341303   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:05.385348   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:05.395810   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:05.399777   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:05.806575   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:05.883549   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:05.894481   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:05.897454   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:06.306225   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:06.384643   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:06.394734   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:06.397597   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:06.806520   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:06.883665   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:06.894596   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:06.897788   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:07.305277   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:07.384022   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:07.395206   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:07.397407   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:07.806364   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:07.884540   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:07.898444   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:07.899788   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:08.612555   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:08.613305   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:08.614720   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:08.614854   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:08.806474   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:08.884429   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:08.895315   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:08.898310   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:09.305704   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:09.385670   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:09.401729   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:09.402064   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:09.806476   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:09.884497   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:09.896336   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:09.898435   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:10.305702   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:10.384393   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:10.395517   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:10.398100   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:10.809398   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:10.885465   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:10.894437   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:10.896692   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:11.305941   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:11.385034   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:11.395883   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:11.398653   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:11.805999   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:11.886682   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:11.895544   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:11.898824   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:12.306306   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:12.384435   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:12.394167   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:12.397768   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:12.806522   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:12.893284   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:12.894868   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:12.896949   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:13.305381   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:13.384501   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:13.397850   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:13.399347   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:13.806393   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:13.888535   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:13.898330   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:13.903215   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:14.305707   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:14.388878   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:14.395229   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:14.397409   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:14.805711   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:14.883785   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:14.894796   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:14.897125   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:15.305414   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:15.385853   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:15.398056   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:15.403305   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:15.806066   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:15.884969   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:15.894640   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:15.897551   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:16.306151   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:16.385209   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:16.395404   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:16.398952   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:16.805488   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:16.885464   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:16.894695   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:16.897200   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:17.305736   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:17.384034   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:17.396956   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:17.401186   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:17.805246   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:17.884517   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:17.894133   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:17.912224   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:18.305247   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:18.384624   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:18.394587   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:18.397099   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:19.037271   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:19.049127   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:19.049538   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:19.049735   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:19.305888   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:19.390106   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:19.395313   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:19.400097   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:19.807446   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:19.885778   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:19.894711   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:19.897488   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:20.305572   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:20.383749   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:20.394706   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:20.397335   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:20.806240   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:20.883812   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:20.897744   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:20.898616   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:21.305809   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:21.383876   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:21.394573   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:21.397445   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:21.806743   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:21.883905   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:21.894656   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:21.897620   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:22.307789   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:22.387150   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:22.395521   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:22.399635   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:22.805432   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:22.885431   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:22.894200   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:22.896770   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:23.306572   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:23.384162   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:23.394312   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:23.397350   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:23.805560   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:23.883998   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:23.895242   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:23.897791   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:24.305410   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:24.390389   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:24.402620   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:24.403548   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:24.806315   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:24.885108   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:24.896347   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:24.898601   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:25.306200   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:25.383864   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:25.394399   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:25.396878   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:25.806331   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:25.884373   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:25.894336   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:25.897200   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:26.305705   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:26.383474   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:26.394150   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:26.396775   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:26.805259   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:26.884164   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:26.894284   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:26.897709   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:27.306208   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:27.384186   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:27.394719   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:27.398107   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:27.806016   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:27.884377   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:27.895410   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:27.897724   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:28.306244   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:28.384230   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:28.395182   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:28.397617   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:28.805976   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:28.884587   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:28.894467   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:28.897541   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:29.314019   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:29.384655   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:29.397912   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:29.400460   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:29.806481   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:29.885308   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:29.894811   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:29.899335   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0805 22:51:30.305822   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:30.384080   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:30.394942   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:30.397585   18059 kapi.go:107] duration metric: took 37.504870684s to wait for kubernetes.io/minikube-addons=registry ...
	I0805 22:51:30.807376   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:30.885233   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:30.894780   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:31.313958   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:31.384121   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:31.394513   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:31.807888   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:32.359569   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:32.359775   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:32.362224   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:32.385144   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:32.395115   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:32.805627   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:32.883314   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:32.894292   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:33.305479   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:33.385579   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:33.394246   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:33.805943   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:33.885803   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:33.894437   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:34.305814   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:34.384002   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:34.394560   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:34.807444   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:34.887484   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:34.895801   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:35.307080   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:35.384644   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:35.395485   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:35.805383   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:35.884967   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:35.894867   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:36.305982   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:36.384314   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:36.394531   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:36.805744   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:36.884074   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:36.895116   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:37.304897   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:37.383852   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:37.394243   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:37.805883   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:37.884517   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:37.895892   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:38.309940   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:38.384638   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:38.395478   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:38.805738   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:38.884458   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:38.894864   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:39.306344   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:39.391115   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:39.394841   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:39.805822   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:39.883781   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:39.894822   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:40.305893   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:40.383837   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:40.398728   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:40.806976   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:40.884613   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:40.895114   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:41.305663   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:41.383595   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:41.394650   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:41.807174   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:41.884087   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:41.894695   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:42.306238   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:42.384375   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:42.394457   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:42.805340   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:42.884910   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:42.894508   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:43.305576   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:43.383200   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:43.395061   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:43.806098   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:43.883258   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:43.894220   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:44.305942   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:44.384162   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:44.394656   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:44.806810   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:44.883693   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:44.895213   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:45.306833   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:45.384061   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:45.394899   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:45.807504   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:46.129855   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:46.131947   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:46.305715   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:46.383716   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:46.394577   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:46.806002   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:46.884346   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:46.894379   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:47.306586   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:47.383251   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:47.394234   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:47.805720   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:48.082709   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:48.087609   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:48.306776   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:48.384401   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:48.394160   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:48.808404   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:48.883685   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:48.895691   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:49.306876   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:49.385465   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:49.394231   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:49.805160   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:49.887676   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:49.894784   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:50.310908   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:50.383748   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:50.394639   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:50.805239   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:50.884114   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:50.893793   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:51.305672   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:51.383602   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:51.394641   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:51.805988   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:51.887536   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:51.902685   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:52.306039   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:52.384024   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:52.394623   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:52.805811   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:52.884179   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:52.894969   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:53.305733   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:53.383078   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:53.395070   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:53.805951   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:53.887517   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:53.895745   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:54.305340   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:54.384330   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:54.393760   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:54.805872   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:54.883612   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:54.894660   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:55.305886   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:55.383765   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:55.398614   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:55.806221   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:55.884335   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:55.896724   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:56.306877   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:56.385288   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:56.394114   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:56.806187   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:56.885864   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:56.894560   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:57.305681   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:57.383579   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:57.394190   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:57.818796   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:57.890635   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:57.896128   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:58.306682   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:58.383354   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:58.397160   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:58.806015   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:58.885013   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:58.901034   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:59.308018   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:59.390002   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:59.395691   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:51:59.805572   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:51:59.884508   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:51:59.900317   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:00.305127   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:52:00.384436   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:00.395727   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:00.805856   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:52:00.883890   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:00.895013   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:01.306318   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:52:01.384006   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:01.395015   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:01.808363   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:52:01.885672   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:01.897782   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:02.310374   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:52:02.385578   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:02.394611   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:02.805531   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:52:02.887489   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:02.894862   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:03.305375   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:52:03.384177   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:03.394997   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:03.805843   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:52:03.884560   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:03.895411   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:04.306689   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:52:04.383989   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:04.395551   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:04.807060   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:52:04.883702   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:04.896549   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:05.306335   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0805 22:52:05.384684   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:05.394251   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:05.806270   18059 kapi.go:107] duration metric: took 1m10.004521145s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0805 22:52:05.808431   18059 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-435364 cluster.
	I0805 22:52:05.810077   18059 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0805 22:52:05.811722   18059 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
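	[editor's illustrative sketch, not part of the captured log] The three out.go lines above describe what the gcp-auth addon just enabled: credentials are mounted into every new pod unless the pod carries the `gcp-auth-skip-secret` label key. A minimal Go sketch of such an opt-out pod spec follows; it assumes the k8s.io/api and k8s.io/apimachinery modules, and the label value "true" is an assumption, since the log only names the key.
	
	package main
	
	import (
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)
	
	func main() {
		// Hypothetical pod: the gcp-auth-skip-secret label key (value assumed to be
		// arbitrary; "true" used here) marks the pod so GCP credentials are not mounted.
		pod := corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name:   "no-gcp-creds",
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
			},
		}
		fmt.Printf("pod %s labels: %v\n", pod.Name, pod.Labels)
	}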
	I0805 22:52:05.884206   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:05.894286   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:06.384605   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:06.394931   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:06.883342   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:06.894006   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:07.383367   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:07.394125   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:07.884372   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:07.894024   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:08.383239   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:08.397905   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:08.888204   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:08.898670   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:09.383352   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:09.393961   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:09.883846   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:09.895117   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:10.384131   18059 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0805 22:52:10.394894   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:10.884399   18059 kapi.go:107] duration metric: took 1m17.00612132s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0805 22:52:10.895660   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:11.394819   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:11.895020   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:12.394588   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:12.896109   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:13.394917   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:13.894462   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:14.395784   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:14.895594   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:15.395321   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:15.896618   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:16.394467   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:16.896216   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:17.395260   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:17.895321   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:18.395383   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:18.896662   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:19.395617   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:19.895654   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:20.395246   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:20.895134   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:21.394987   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:21.895183   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:22.396341   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:22.895517   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:23.395590   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:23.895734   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:24.394627   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:24.894767   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:25.394793   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:25.894982   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:26.397326   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:26.895154   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:27.394989   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:27.894472   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:28.395384   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:28.896093   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:29.395345   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:29.895178   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:30.395134   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:30.895654   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:31.396110   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:31.896298   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:32.395262   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:32.897391   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:33.395534   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:33.900137   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:34.394721   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:34.894814   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:35.394753   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:35.894680   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:36.395539   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:36.896620   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:37.396457   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:37.895646   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:38.400663   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:38.895311   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:39.395320   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:39.894839   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:40.394210   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:40.895893   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:41.394834   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:41.894918   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:42.395350   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:42.895411   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:43.395832   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:43.895494   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:44.395390   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:44.895255   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:45.395475   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:45.896181   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:46.395501   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:46.896480   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:47.395515   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:47.896059   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:48.395520   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:48.895063   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:49.395490   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:49.895735   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:50.395484   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:50.896322   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:51.396308   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:51.897273   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:52.397243   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:52.895585   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:53.396134   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:53.895692   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:54.394595   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:54.895406   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:55.395412   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:55.896554   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:56.395473   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:56.895670   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:57.394295   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:57.895319   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:58.396723   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:58.895672   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:59.396206   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:52:59.895335   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:00.399317   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:00.895510   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:01.395903   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:01.896217   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:02.397569   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:02.896826   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:03.394988   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:03.895818   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:04.395455   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:04.895831   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:05.395761   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:05.894988   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:06.395045   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:06.895066   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:07.395251   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:07.895282   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:08.395925   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:08.896715   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:09.395483   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:09.895510   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:10.395245   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:10.894313   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:11.831692   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:11.895687   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:12.395448   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:12.896912   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:13.394476   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:13.896393   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:14.549979   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:14.896068   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:15.396987   18059 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0805 22:53:15.895327   18059 kapi.go:107] duration metric: took 2m23.004955809s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0805 22:53:15.897040   18059 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, storage-provisioner-rancher, helm-tiller, metrics-server, inspektor-gadget, nvidia-device-plugin, cloud-spanner, yakd, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I0805 22:53:15.898514   18059 addons.go:510] duration metric: took 2m32.221311776s for enable addons: enabled=[ingress-dns storage-provisioner storage-provisioner-rancher helm-tiller metrics-server inspektor-gadget nvidia-device-plugin cloud-spanner yakd volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I0805 22:53:15.898554   18059 start.go:246] waiting for cluster config update ...
	I0805 22:53:15.898577   18059 start.go:255] writing updated cluster config ...
	I0805 22:53:15.898818   18059 ssh_runner.go:195] Run: rm -f paused
	I0805 22:53:15.950673   18059 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0805 22:53:15.952813   18059 out.go:177] * Done! kubectl is now configured to use "addons-435364" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 05 22:59:41 addons-435364 crio[676]: time="2024-08-05 22:59:41.664366382Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722898781664338415,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589584,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0548fdcc-36d1-4471-80f1-6fd57e340576 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 22:59:41 addons-435364 crio[676]: time="2024-08-05 22:59:41.665087303Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=09f917df-e67f-464c-bc9c-af9e24185228 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 22:59:41 addons-435364 crio[676]: time="2024-08-05 22:59:41.665156985Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=09f917df-e67f-464c-bc9c-af9e24185228 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 22:59:41 addons-435364 crio[676]: time="2024-08-05 22:59:41.665408548Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9dce877f2c362a263adc22fa7c1dff8aa7deca2278b49b2cc88d482a8b6b4d04,PodSandboxId:8fd827d106ec2d0907c305fa69adb920d81b4078582840c0168b625b01ffa0a0,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722898627258550448,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-nbsh9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 18dc8ba2-00d5-49a3-891c-7e66fff40039,},Annotations:map[string]string{io.kubernetes.container.hash: 5f2dd573,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e682946ae982b910587c4dfd32ee4b18fb9be6ffc0c0ed2c73c3bcaccab5b3,PodSandboxId:191ee1227f0613fe15909c9265bcdf71f2df55c0514d55c7c442ab0cc2dd6591,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722898488055385763,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f96f1bbf-3982-41a3-94f0-5cab0827ddb3,},Annotations:map[string]string{io.kubernet
es.container.hash: cfbed574,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:279179149c33b3043b28d0af3a8612082ccf6cd6248319270f1dfdc7fc567211,PodSandboxId:3231f54f5336a24e2ef0cf19c8327249775ae9e3c236f930ed00e3ef1110ed36,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722898399559693191,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 973bc6a0-8c5b-48a5-a
795-cc389f59d219,},Annotations:map[string]string{io.kubernetes.container.hash: 2b7f493d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:431e42b0b0b158402f10cb7b93827a107055987140e4dce351b570dc3f93facd,PodSandboxId:b020eb62a7b0f24faa5795c0c3d3869f774509a7eecdb34c96cfb5f299c3babf,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722898302555177776,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-m9t52,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: f825462d-de15-4aa7-9436-76eda3bbd66f,},Annotations:map[string]string{io.kubernetes.container.hash: 62e4cceb,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b5c994323214402a42053a26dbdf6aaa73eeb251beee1a898876e1c323893d5,PodSandboxId:3129664cc0275e797e40cebd7629f9013b4ad7642ad0896ad1cef672c78146a5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722898250327121182,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfbc5ee9-491f-4c8d-aecc-72ba061092ec,},Annotations:map[string]string{io.kubernetes.container.hash: 1c0a402c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0abdbd3ed10f077f41965f1ab420f42938621e8dbc61df531790ac2ee7e9c40e,PodSandboxId:1055162b97d8516c24dc2a85ed57c9facac9405e2411fd71e3390b11f0b160b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722898246723247613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6
d8ff4d-ng8rk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2091f1e9-b1aa-45fd-8197-0f661fcf784e,},Annotations:map[string]string{io.kubernetes.container.hash: 89f0555d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffd14a580eef1dd67f8e26cf09eeb41251619feba45e4ab0d12f7f5b32879188,PodSandboxId:22bb8d21f29a210bd60addbae54caa6a518370f2cb4e18a6e41d2c21019b1d38,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381
d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722898243472744671,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lt8r2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1a7c99c-379f-4e2d-b241-4de97adffa76,},Annotations:map[string]string{io.kubernetes.container.hash: 3ab3dbc6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e58d0c10af25f73f245cd49ac44d141e0b4dc75e8e4ac8995698b79ed373af5e,PodSandboxId:961be72ba0eb869239d98780834acef5b053ceeed32f94be162b3be2cf91ec70,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e
5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722898224416768929,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-435364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 321c366bd160eeee564705797a7fc2fc,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de461982723232193cc406adb03555f3314162eaba4b5e3472d116ab53272189,PodSandboxId:b4dfc4b300ffc8791cc6d909cd97644db5094407db26e8ee6de5b4357f14ce25,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_RUNNING,CreatedAt:1722898224450036379,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-435364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d200a2d8f14313b20affd7e51da4716,},Annotations:map[string]string{io.kubernetes.container.hash: f267c287,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5b169a97f6f0fee85e8a3c58958ef344c63040a0d46d50b287ab5277d491e7d,PodSandboxId:f3b9318379ac35b248f9d1a079b0c94d03813100d40a7289914625df00dcf608,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:17228
98224463679631,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-435364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18812a5d71e8307dfae178321f661472,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92eafd2fe5370e20300cf4b57a5758e16e3dee2bb64c465c25b601d07f7aa4c6,PodSandboxId:fef94c54938c430ddc6f396f0cac092b131d58fcf51ba251606475e1c80854d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722
898224394938513,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-435364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3913d430a2d94646f23a316dc2057cb3,},Annotations:map[string]string{io.kubernetes.container.hash: 9ffc2af7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=09f917df-e67f-464c-bc9c-af9e24185228 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 22:59:41 addons-435364 crio[676]: time="2024-08-05 22:59:41.702098160Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=11ac9fa8-5f88-4c33-8cb0-bbb4b24739e8 name=/runtime.v1.RuntimeService/Version
	Aug 05 22:59:41 addons-435364 crio[676]: time="2024-08-05 22:59:41.702212333Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=11ac9fa8-5f88-4c33-8cb0-bbb4b24739e8 name=/runtime.v1.RuntimeService/Version
	Aug 05 22:59:41 addons-435364 crio[676]: time="2024-08-05 22:59:41.703488691Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1162cf37-d9a1-47fe-96a5-a174885ce4c8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 22:59:41 addons-435364 crio[676]: time="2024-08-05 22:59:41.704904352Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722898781704875609,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589584,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1162cf37-d9a1-47fe-96a5-a174885ce4c8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 22:59:41 addons-435364 crio[676]: time="2024-08-05 22:59:41.705566904Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ab2d2dcd-e83c-49b0-ad9f-30429fc753d5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 22:59:41 addons-435364 crio[676]: time="2024-08-05 22:59:41.705682159Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ab2d2dcd-e83c-49b0-ad9f-30429fc753d5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 22:59:41 addons-435364 crio[676]: time="2024-08-05 22:59:41.706190572Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9dce877f2c362a263adc22fa7c1dff8aa7deca2278b49b2cc88d482a8b6b4d04,PodSandboxId:8fd827d106ec2d0907c305fa69adb920d81b4078582840c0168b625b01ffa0a0,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722898627258550448,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-nbsh9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 18dc8ba2-00d5-49a3-891c-7e66fff40039,},Annotations:map[string]string{io.kubernetes.container.hash: 5f2dd573,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e682946ae982b910587c4dfd32ee4b18fb9be6ffc0c0ed2c73c3bcaccab5b3,PodSandboxId:191ee1227f0613fe15909c9265bcdf71f2df55c0514d55c7c442ab0cc2dd6591,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722898488055385763,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f96f1bbf-3982-41a3-94f0-5cab0827ddb3,},Annotations:map[string]string{io.kubernet
es.container.hash: cfbed574,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:279179149c33b3043b28d0af3a8612082ccf6cd6248319270f1dfdc7fc567211,PodSandboxId:3231f54f5336a24e2ef0cf19c8327249775ae9e3c236f930ed00e3ef1110ed36,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722898399559693191,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 973bc6a0-8c5b-48a5-a
795-cc389f59d219,},Annotations:map[string]string{io.kubernetes.container.hash: 2b7f493d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:431e42b0b0b158402f10cb7b93827a107055987140e4dce351b570dc3f93facd,PodSandboxId:b020eb62a7b0f24faa5795c0c3d3869f774509a7eecdb34c96cfb5f299c3babf,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722898302555177776,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-m9t52,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: f825462d-de15-4aa7-9436-76eda3bbd66f,},Annotations:map[string]string{io.kubernetes.container.hash: 62e4cceb,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b5c994323214402a42053a26dbdf6aaa73eeb251beee1a898876e1c323893d5,PodSandboxId:3129664cc0275e797e40cebd7629f9013b4ad7642ad0896ad1cef672c78146a5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722898250327121182,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfbc5ee9-491f-4c8d-aecc-72ba061092ec,},Annotations:map[string]string{io.kubernetes.container.hash: 1c0a402c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0abdbd3ed10f077f41965f1ab420f42938621e8dbc61df531790ac2ee7e9c40e,PodSandboxId:1055162b97d8516c24dc2a85ed57c9facac9405e2411fd71e3390b11f0b160b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722898246723247613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6
d8ff4d-ng8rk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2091f1e9-b1aa-45fd-8197-0f661fcf784e,},Annotations:map[string]string{io.kubernetes.container.hash: 89f0555d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffd14a580eef1dd67f8e26cf09eeb41251619feba45e4ab0d12f7f5b32879188,PodSandboxId:22bb8d21f29a210bd60addbae54caa6a518370f2cb4e18a6e41d2c21019b1d38,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381
d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722898243472744671,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lt8r2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1a7c99c-379f-4e2d-b241-4de97adffa76,},Annotations:map[string]string{io.kubernetes.container.hash: 3ab3dbc6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e58d0c10af25f73f245cd49ac44d141e0b4dc75e8e4ac8995698b79ed373af5e,PodSandboxId:961be72ba0eb869239d98780834acef5b053ceeed32f94be162b3be2cf91ec70,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e
5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722898224416768929,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-435364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 321c366bd160eeee564705797a7fc2fc,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de461982723232193cc406adb03555f3314162eaba4b5e3472d116ab53272189,PodSandboxId:b4dfc4b300ffc8791cc6d909cd97644db5094407db26e8ee6de5b4357f14ce25,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_RUNNING,CreatedAt:1722898224450036379,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-435364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d200a2d8f14313b20affd7e51da4716,},Annotations:map[string]string{io.kubernetes.container.hash: f267c287,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5b169a97f6f0fee85e8a3c58958ef344c63040a0d46d50b287ab5277d491e7d,PodSandboxId:f3b9318379ac35b248f9d1a079b0c94d03813100d40a7289914625df00dcf608,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:17228
98224463679631,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-435364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18812a5d71e8307dfae178321f661472,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92eafd2fe5370e20300cf4b57a5758e16e3dee2bb64c465c25b601d07f7aa4c6,PodSandboxId:fef94c54938c430ddc6f396f0cac092b131d58fcf51ba251606475e1c80854d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722
898224394938513,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-435364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3913d430a2d94646f23a316dc2057cb3,},Annotations:map[string]string{io.kubernetes.container.hash: 9ffc2af7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ab2d2dcd-e83c-49b0-ad9f-30429fc753d5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 22:59:41 addons-435364 crio[676]: time="2024-08-05 22:59:41.745328477Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b660a4a5-a400-46c5-87cc-6a5b16a160f0 name=/runtime.v1.RuntimeService/Version
	Aug 05 22:59:41 addons-435364 crio[676]: time="2024-08-05 22:59:41.745425096Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b660a4a5-a400-46c5-87cc-6a5b16a160f0 name=/runtime.v1.RuntimeService/Version
	Aug 05 22:59:41 addons-435364 crio[676]: time="2024-08-05 22:59:41.746864029Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cba16371-7aed-4593-9eb1-09b548854c29 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 22:59:41 addons-435364 crio[676]: time="2024-08-05 22:59:41.748147385Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722898781748120860,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589584,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cba16371-7aed-4593-9eb1-09b548854c29 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 22:59:41 addons-435364 crio[676]: time="2024-08-05 22:59:41.748764302Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=720e2a33-b638-4bd7-9516-637489d4ea25 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 22:59:41 addons-435364 crio[676]: time="2024-08-05 22:59:41.748830629Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=720e2a33-b638-4bd7-9516-637489d4ea25 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 22:59:41 addons-435364 crio[676]: time="2024-08-05 22:59:41.749063435Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9dce877f2c362a263adc22fa7c1dff8aa7deca2278b49b2cc88d482a8b6b4d04,PodSandboxId:8fd827d106ec2d0907c305fa69adb920d81b4078582840c0168b625b01ffa0a0,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722898627258550448,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-nbsh9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 18dc8ba2-00d5-49a3-891c-7e66fff40039,},Annotations:map[string]string{io.kubernetes.container.hash: 5f2dd573,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e682946ae982b910587c4dfd32ee4b18fb9be6ffc0c0ed2c73c3bcaccab5b3,PodSandboxId:191ee1227f0613fe15909c9265bcdf71f2df55c0514d55c7c442ab0cc2dd6591,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722898488055385763,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f96f1bbf-3982-41a3-94f0-5cab0827ddb3,},Annotations:map[string]string{io.kubernet
es.container.hash: cfbed574,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:279179149c33b3043b28d0af3a8612082ccf6cd6248319270f1dfdc7fc567211,PodSandboxId:3231f54f5336a24e2ef0cf19c8327249775ae9e3c236f930ed00e3ef1110ed36,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722898399559693191,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 973bc6a0-8c5b-48a5-a
795-cc389f59d219,},Annotations:map[string]string{io.kubernetes.container.hash: 2b7f493d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:431e42b0b0b158402f10cb7b93827a107055987140e4dce351b570dc3f93facd,PodSandboxId:b020eb62a7b0f24faa5795c0c3d3869f774509a7eecdb34c96cfb5f299c3babf,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722898302555177776,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-m9t52,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: f825462d-de15-4aa7-9436-76eda3bbd66f,},Annotations:map[string]string{io.kubernetes.container.hash: 62e4cceb,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b5c994323214402a42053a26dbdf6aaa73eeb251beee1a898876e1c323893d5,PodSandboxId:3129664cc0275e797e40cebd7629f9013b4ad7642ad0896ad1cef672c78146a5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722898250327121182,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfbc5ee9-491f-4c8d-aecc-72ba061092ec,},Annotations:map[string]string{io.kubernetes.container.hash: 1c0a402c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0abdbd3ed10f077f41965f1ab420f42938621e8dbc61df531790ac2ee7e9c40e,PodSandboxId:1055162b97d8516c24dc2a85ed57c9facac9405e2411fd71e3390b11f0b160b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722898246723247613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6
d8ff4d-ng8rk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2091f1e9-b1aa-45fd-8197-0f661fcf784e,},Annotations:map[string]string{io.kubernetes.container.hash: 89f0555d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffd14a580eef1dd67f8e26cf09eeb41251619feba45e4ab0d12f7f5b32879188,PodSandboxId:22bb8d21f29a210bd60addbae54caa6a518370f2cb4e18a6e41d2c21019b1d38,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381
d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722898243472744671,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lt8r2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1a7c99c-379f-4e2d-b241-4de97adffa76,},Annotations:map[string]string{io.kubernetes.container.hash: 3ab3dbc6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e58d0c10af25f73f245cd49ac44d141e0b4dc75e8e4ac8995698b79ed373af5e,PodSandboxId:961be72ba0eb869239d98780834acef5b053ceeed32f94be162b3be2cf91ec70,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e
5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722898224416768929,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-435364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 321c366bd160eeee564705797a7fc2fc,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de461982723232193cc406adb03555f3314162eaba4b5e3472d116ab53272189,PodSandboxId:b4dfc4b300ffc8791cc6d909cd97644db5094407db26e8ee6de5b4357f14ce25,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_RUNNING,CreatedAt:1722898224450036379,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-435364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d200a2d8f14313b20affd7e51da4716,},Annotations:map[string]string{io.kubernetes.container.hash: f267c287,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5b169a97f6f0fee85e8a3c58958ef344c63040a0d46d50b287ab5277d491e7d,PodSandboxId:f3b9318379ac35b248f9d1a079b0c94d03813100d40a7289914625df00dcf608,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:17228
98224463679631,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-435364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18812a5d71e8307dfae178321f661472,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92eafd2fe5370e20300cf4b57a5758e16e3dee2bb64c465c25b601d07f7aa4c6,PodSandboxId:fef94c54938c430ddc6f396f0cac092b131d58fcf51ba251606475e1c80854d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722
898224394938513,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-435364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3913d430a2d94646f23a316dc2057cb3,},Annotations:map[string]string{io.kubernetes.container.hash: 9ffc2af7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=720e2a33-b638-4bd7-9516-637489d4ea25 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 22:59:41 addons-435364 crio[676]: time="2024-08-05 22:59:41.782892951Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d122127d-186b-4793-855a-848291b8ebb1 name=/runtime.v1.RuntimeService/Version
	Aug 05 22:59:41 addons-435364 crio[676]: time="2024-08-05 22:59:41.782995269Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d122127d-186b-4793-855a-848291b8ebb1 name=/runtime.v1.RuntimeService/Version
	Aug 05 22:59:41 addons-435364 crio[676]: time="2024-08-05 22:59:41.784279644Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=51f22c90-802f-4548-b78c-62f9ace38402 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 22:59:41 addons-435364 crio[676]: time="2024-08-05 22:59:41.785532900Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722898781785501765,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:589584,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=51f22c90-802f-4548-b78c-62f9ace38402 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 22:59:41 addons-435364 crio[676]: time="2024-08-05 22:59:41.786265370Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=448e75ed-1a9c-4cb9-92e3-66c3fa720263 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 22:59:41 addons-435364 crio[676]: time="2024-08-05 22:59:41.786338523Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=448e75ed-1a9c-4cb9-92e3-66c3fa720263 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 22:59:41 addons-435364 crio[676]: time="2024-08-05 22:59:41.786680818Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9dce877f2c362a263adc22fa7c1dff8aa7deca2278b49b2cc88d482a8b6b4d04,PodSandboxId:8fd827d106ec2d0907c305fa69adb920d81b4078582840c0168b625b01ffa0a0,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1722898627258550448,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-6778b5fc9f-nbsh9,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 18dc8ba2-00d5-49a3-891c-7e66fff40039,},Annotations:map[string]string{io.kubernetes.container.hash: 5f2dd573,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e682946ae982b910587c4dfd32ee4b18fb9be6ffc0c0ed2c73c3bcaccab5b3,PodSandboxId:191ee1227f0613fe15909c9265bcdf71f2df55c0514d55c7c442ab0cc2dd6591,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1722898488055385763,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f96f1bbf-3982-41a3-94f0-5cab0827ddb3,},Annotations:map[string]string{io.kubernet
es.container.hash: cfbed574,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:279179149c33b3043b28d0af3a8612082ccf6cd6248319270f1dfdc7fc567211,PodSandboxId:3231f54f5336a24e2ef0cf19c8327249775ae9e3c236f930ed00e3ef1110ed36,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1722898399559693191,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 973bc6a0-8c5b-48a5-a
795-cc389f59d219,},Annotations:map[string]string{io.kubernetes.container.hash: 2b7f493d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:431e42b0b0b158402f10cb7b93827a107055987140e4dce351b570dc3f93facd,PodSandboxId:b020eb62a7b0f24faa5795c0c3d3869f774509a7eecdb34c96cfb5f299c3babf,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1722898302555177776,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-c59844bb4-m9t52,io.kubernetes.pod.namespace: kube-system,i
o.kubernetes.pod.uid: f825462d-de15-4aa7-9436-76eda3bbd66f,},Annotations:map[string]string{io.kubernetes.container.hash: 62e4cceb,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b5c994323214402a42053a26dbdf6aaa73eeb251beee1a898876e1c323893d5,PodSandboxId:3129664cc0275e797e40cebd7629f9013b4ad7642ad0896ad1cef672c78146a5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722898250327121182,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,
io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cfbc5ee9-491f-4c8d-aecc-72ba061092ec,},Annotations:map[string]string{io.kubernetes.container.hash: 1c0a402c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0abdbd3ed10f077f41965f1ab420f42938621e8dbc61df531790ac2ee7e9c40e,PodSandboxId:1055162b97d8516c24dc2a85ed57c9facac9405e2411fd71e3390b11f0b160b4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722898246723247613,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6
d8ff4d-ng8rk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2091f1e9-b1aa-45fd-8197-0f661fcf784e,},Annotations:map[string]string{io.kubernetes.container.hash: 89f0555d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffd14a580eef1dd67f8e26cf09eeb41251619feba45e4ab0d12f7f5b32879188,PodSandboxId:22bb8d21f29a210bd60addbae54caa6a518370f2cb4e18a6e41d2c21019b1d38,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381
d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722898243472744671,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lt8r2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1a7c99c-379f-4e2d-b241-4de97adffa76,},Annotations:map[string]string{io.kubernetes.container.hash: 3ab3dbc6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e58d0c10af25f73f245cd49ac44d141e0b4dc75e8e4ac8995698b79ed373af5e,PodSandboxId:961be72ba0eb869239d98780834acef5b053ceeed32f94be162b3be2cf91ec70,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e
5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722898224416768929,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-435364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 321c366bd160eeee564705797a7fc2fc,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de461982723232193cc406adb03555f3314162eaba4b5e3472d116ab53272189,PodSandboxId:b4dfc4b300ffc8791cc6d909cd97644db5094407db26e8ee6de5b4357f14ce25,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTA
INER_RUNNING,CreatedAt:1722898224450036379,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-435364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d200a2d8f14313b20affd7e51da4716,},Annotations:map[string]string{io.kubernetes.container.hash: f267c287,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5b169a97f6f0fee85e8a3c58958ef344c63040a0d46d50b287ab5277d491e7d,PodSandboxId:f3b9318379ac35b248f9d1a079b0c94d03813100d40a7289914625df00dcf608,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:17228
98224463679631,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-435364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18812a5d71e8307dfae178321f661472,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92eafd2fe5370e20300cf4b57a5758e16e3dee2bb64c465c25b601d07f7aa4c6,PodSandboxId:fef94c54938c430ddc6f396f0cac092b131d58fcf51ba251606475e1c80854d1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722
898224394938513,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-435364,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3913d430a2d94646f23a316dc2057cb3,},Annotations:map[string]string{io.kubernetes.container.hash: 9ffc2af7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=448e75ed-1a9c-4cb9-92e3-66c3fa720263 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9dce877f2c362       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   8fd827d106ec2       hello-world-app-6778b5fc9f-nbsh9
	30e682946ae98       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         4 minutes ago       Running             nginx                     0                   191ee1227f061       nginx
	279179149c33b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   3231f54f5336a       busybox
	431e42b0b0b15       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   7 minutes ago       Running             metrics-server            0                   b020eb62a7b0f       metrics-server-c59844bb4-m9t52
	7b5c994323214       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        8 minutes ago       Running             storage-provisioner       0                   3129664cc0275       storage-provisioner
	0abdbd3ed10f0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        8 minutes ago       Running             coredns                   0                   1055162b97d85       coredns-7db6d8ff4d-ng8rk
	ffd14a580eef1       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                        8 minutes ago       Running             kube-proxy                0                   22bb8d21f29a2       kube-proxy-lt8r2
	b5b169a97f6f0       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                        9 minutes ago       Running             kube-controller-manager   0                   f3b9318379ac3       kube-controller-manager-addons-435364
	de46198272323       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                        9 minutes ago       Running             etcd                      0                   b4dfc4b300ffc       etcd-addons-435364
	e58d0c10af25f       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                        9 minutes ago       Running             kube-scheduler            0                   961be72ba0eb8       kube-scheduler-addons-435364
	92eafd2fe5370       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                        9 minutes ago       Running             kube-apiserver            0                   fef94c54938c4       kube-apiserver-addons-435364
	
	
	==> coredns [0abdbd3ed10f077f41965f1ab420f42938621e8dbc61df531790ac2ee7e9c40e] <==
	[INFO] 10.244.0.7:59460 - 7528 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000197257s
	[INFO] 10.244.0.7:36528 - 7268 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000131316s
	[INFO] 10.244.0.7:36528 - 50016 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000114669s
	[INFO] 10.244.0.7:50164 - 20174 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000094933s
	[INFO] 10.244.0.7:50164 - 31949 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000147131s
	[INFO] 10.244.0.7:39320 - 31278 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00010435s
	[INFO] 10.244.0.7:39320 - 58668 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000075439s
	[INFO] 10.244.0.7:53046 - 48151 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000088933s
	[INFO] 10.244.0.7:53046 - 45824 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000070828s
	[INFO] 10.244.0.7:34899 - 32792 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000073984s
	[INFO] 10.244.0.7:34899 - 64541 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000031781s
	[INFO] 10.244.0.7:44128 - 58280 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000070705s
	[INFO] 10.244.0.7:44128 - 48046 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000053088s
	[INFO] 10.244.0.7:39146 - 12662 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000049025s
	[INFO] 10.244.0.7:39146 - 50551 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000037963s
	[INFO] 10.244.0.21:55956 - 61218 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000362377s
	[INFO] 10.244.0.21:43818 - 41906 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000070895s
	[INFO] 10.244.0.21:60942 - 59673 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000117368s
	[INFO] 10.244.0.21:36743 - 47392 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000062908s
	[INFO] 10.244.0.21:54746 - 35272 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000172605s
	[INFO] 10.244.0.21:32900 - 59100 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000108178s
	[INFO] 10.244.0.21:36957 - 16274 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000499387s
	[INFO] 10.244.0.21:42027 - 9559 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000601238s
	[INFO] 10.244.0.26:43054 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000443228s
	[INFO] 10.244.0.26:49964 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000095956s
	
	
	==> describe nodes <==
	Name:               addons-435364
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-435364
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=addons-435364
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_05T22_50_30_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-435364
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 22:50:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-435364
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 22:59:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 22:57:40 +0000   Mon, 05 Aug 2024 22:50:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 22:57:40 +0000   Mon, 05 Aug 2024 22:50:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 22:57:40 +0000   Mon, 05 Aug 2024 22:50:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 22:57:40 +0000   Mon, 05 Aug 2024 22:50:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.129
	  Hostname:    addons-435364
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 242967d5bc594151bd5fc013cd6dfd9d
	  System UUID:                242967d5-bc59-4151-bd5f-c013cd6dfd9d
	  Boot ID:                    bba553dc-ef04-4531-98f6-7a74d426d8f2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	  default                     hello-world-app-6778b5fc9f-nbsh9         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 coredns-7db6d8ff4d-ng8rk                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     8m59s
	  kube-system                 etcd-addons-435364                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         9m14s
	  kube-system                 kube-apiserver-addons-435364             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 kube-controller-manager-addons-435364    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 kube-proxy-lt8r2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m59s
	  kube-system                 kube-scheduler-addons-435364             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 metrics-server-c59844bb4-m9t52           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         8m53s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m57s  kube-proxy       
	  Normal  Starting                 9m13s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m13s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m13s  kubelet          Node addons-435364 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m13s  kubelet          Node addons-435364 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m13s  kubelet          Node addons-435364 status is now: NodeHasSufficientPID
	  Normal  NodeReady                9m12s  kubelet          Node addons-435364 status is now: NodeReady
	  Normal  RegisteredNode           9m     node-controller  Node addons-435364 event: Registered Node addons-435364 in Controller
	
	
	==> dmesg <==
	[ +28.171030] kauditd_printk_skb: 4 callbacks suppressed
	[ +10.816151] kauditd_printk_skb: 27 callbacks suppressed
	[  +8.115843] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.116573] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.116063] kauditd_printk_skb: 20 callbacks suppressed
	[Aug 5 22:52] kauditd_printk_skb: 88 callbacks suppressed
	[  +8.807448] kauditd_printk_skb: 12 callbacks suppressed
	[ +22.023864] kauditd_printk_skb: 24 callbacks suppressed
	[ +14.122958] kauditd_printk_skb: 24 callbacks suppressed
	[Aug 5 22:53] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.359583] kauditd_printk_skb: 16 callbacks suppressed
	[  +7.631434] kauditd_printk_skb: 24 callbacks suppressed
	[  +8.957808] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.897545] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.002009] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.087794] kauditd_printk_skb: 34 callbacks suppressed
	[  +6.316599] kauditd_printk_skb: 31 callbacks suppressed
	[Aug 5 22:54] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.475253] kauditd_printk_skb: 15 callbacks suppressed
	[  +7.942293] kauditd_printk_skb: 6 callbacks suppressed
	[  +9.168342] kauditd_printk_skb: 10 callbacks suppressed
	[  +8.130342] kauditd_printk_skb: 33 callbacks suppressed
	[  +6.823058] kauditd_printk_skb: 40 callbacks suppressed
	[Aug 5 22:57] kauditd_printk_skb: 41 callbacks suppressed
	[Aug 5 22:59] kauditd_printk_skb: 28 callbacks suppressed
	
	
	==> etcd [de461982723232193cc406adb03555f3314162eaba4b5e3472d116ab53272189] <==
	{"level":"info","ts":"2024-08-05T22:51:48.073317Z","caller":"traceutil/trace.go:171","msg":"trace[45443577] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:976; }","duration":"187.000952ms","start":"2024-08-05T22:51:47.886308Z","end":"2024-08-05T22:51:48.073309Z","steps":["trace[45443577] 'agreement among raft nodes before linearized reading'  (duration: 186.862347ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-05T22:51:48.073518Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"199.500646ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:85510"}
	{"level":"info","ts":"2024-08-05T22:51:48.073648Z","caller":"traceutil/trace.go:171","msg":"trace[635675946] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:976; }","duration":"199.653443ms","start":"2024-08-05T22:51:47.873984Z","end":"2024-08-05T22:51:48.073638Z","steps":["trace[635675946] 'agreement among raft nodes before linearized reading'  (duration: 199.305766ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T22:51:51.775295Z","caller":"traceutil/trace.go:171","msg":"trace[1982578713] transaction","detail":"{read_only:false; response_revision:1002; number_of_response:1; }","duration":"304.708861ms","start":"2024-08-05T22:51:51.470572Z","end":"2024-08-05T22:51:51.77528Z","steps":["trace[1982578713] 'process raft request'  (duration: 304.611449ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-05T22:51:51.775421Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-05T22:51:51.470555Z","time spent":"304.780957ms","remote":"127.0.0.1:43460","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-435364\" mod_revision:961 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-435364\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-435364\" > >"}
	{"level":"info","ts":"2024-08-05T22:52:08.222785Z","caller":"traceutil/trace.go:171","msg":"trace[354439898] transaction","detail":"{read_only:false; response_revision:1133; number_of_response:1; }","duration":"334.360711ms","start":"2024-08-05T22:52:07.887639Z","end":"2024-08-05T22:52:08.222Z","steps":["trace[354439898] 'process raft request'  (duration: 333.7897ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-05T22:52:08.222902Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-05T22:52:07.887584Z","time spent":"335.265726ms","remote":"127.0.0.1:43460","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":539,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1108 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:452 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"warn","ts":"2024-08-05T22:53:11.816333Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"434.009449ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-08-05T22:53:11.817453Z","caller":"traceutil/trace.go:171","msg":"trace[228844107] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1261; }","duration":"435.168121ms","start":"2024-08-05T22:53:11.382259Z","end":"2024-08-05T22:53:11.817427Z","steps":["trace[228844107] 'range keys from in-memory index tree'  (duration: 433.88495ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-05T22:53:11.817712Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-05T22:53:11.382242Z","time spent":"435.44447ms","remote":"127.0.0.1:43364","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":3,"response size":14386,"request content":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" "}
	{"level":"info","ts":"2024-08-05T22:53:14.531Z","caller":"traceutil/trace.go:171","msg":"trace[113767598] linearizableReadLoop","detail":"{readStateIndex:1317; appliedIndex:1316; }","duration":"308.064941ms","start":"2024-08-05T22:53:14.222859Z","end":"2024-08-05T22:53:14.530924Z","steps":["trace[113767598] 'read index received'  (duration: 302.130955ms)","trace[113767598] 'applied index is now lower than readState.Index'  (duration: 5.933063ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-05T22:53:14.531324Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"308.382815ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-05T22:53:14.531425Z","caller":"traceutil/trace.go:171","msg":"trace[1706853534] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1264; }","duration":"308.580107ms","start":"2024-08-05T22:53:14.222834Z","end":"2024-08-05T22:53:14.531414Z","steps":["trace[1706853534] 'agreement among raft nodes before linearized reading'  (duration: 308.379139ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-05T22:53:14.531525Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-05T22:53:14.22281Z","time spent":"308.708886ms","remote":"127.0.0.1:53582","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-08-05T22:53:14.531656Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.153515ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumes/\" range_end:\"/registry/persistentvolumes0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-05T22:53:14.53174Z","caller":"traceutil/trace.go:171","msg":"trace[161278946] range","detail":"{range_begin:/registry/persistentvolumes/; range_end:/registry/persistentvolumes0; response_count:0; response_revision:1264; }","duration":"154.323724ms","start":"2024-08-05T22:53:14.377404Z","end":"2024-08-05T22:53:14.531728Z","steps":["trace[161278946] 'agreement among raft nodes before linearized reading'  (duration: 154.028639ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-05T22:53:14.53203Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.103294ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14363"}
	{"level":"info","ts":"2024-08-05T22:53:14.532759Z","caller":"traceutil/trace.go:171","msg":"trace[2037048085] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1264; }","duration":"149.913604ms","start":"2024-08-05T22:53:14.382832Z","end":"2024-08-05T22:53:14.532745Z","steps":["trace[2037048085] 'agreement among raft nodes before linearized reading'  (duration: 149.113811ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T22:54:02.633083Z","caller":"traceutil/trace.go:171","msg":"trace[279443207] linearizableReadLoop","detail":"{readStateIndex:1622; appliedIndex:1621; }","duration":"270.210508ms","start":"2024-08-05T22:54:02.362847Z","end":"2024-08-05T22:54:02.633057Z","steps":["trace[279443207] 'read index received'  (duration: 270.089567ms)","trace[279443207] 'applied index is now lower than readState.Index'  (duration: 120.37µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-05T22:54:02.633779Z","caller":"traceutil/trace.go:171","msg":"trace[950448534] transaction","detail":"{read_only:false; response_revision:1553; number_of_response:1; }","duration":"383.315909ms","start":"2024-08-05T22:54:02.250447Z","end":"2024-08-05T22:54:02.633763Z","steps":["trace[950448534] 'process raft request'  (duration: 382.527998ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-05T22:54:02.634777Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-05T22:54:02.250429Z","time spent":"384.234427ms","remote":"127.0.0.1:43460","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":677,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-ikapivztbtbzzxquhxsg22mb5m\" mod_revision:1480 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-ikapivztbtbzzxquhxsg22mb5m\" value_size:604 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-ikapivztbtbzzxquhxsg22mb5m\" > >"}
	{"level":"warn","ts":"2024-08-05T22:54:02.63548Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"270.437051ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:3 size:8910"}
	{"level":"info","ts":"2024-08-05T22:54:02.636263Z","caller":"traceutil/trace.go:171","msg":"trace[1506143419] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:3; response_revision:1553; }","duration":"273.447196ms","start":"2024-08-05T22:54:02.362802Z","end":"2024-08-05T22:54:02.636249Z","steps":["trace[1506143419] 'agreement among raft nodes before linearized reading'  (duration: 270.364005ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T22:54:36.225748Z","caller":"traceutil/trace.go:171","msg":"trace[2002722119] transaction","detail":"{read_only:false; response_revision:1756; number_of_response:1; }","duration":"268.925174ms","start":"2024-08-05T22:54:35.956797Z","end":"2024-08-05T22:54:36.225722Z","steps":["trace[2002722119] 'process raft request'  (duration: 268.594751ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T22:55:13.496992Z","caller":"traceutil/trace.go:171","msg":"trace[520917928] transaction","detail":"{read_only:false; response_revision:1998; number_of_response:1; }","duration":"190.518517ms","start":"2024-08-05T22:55:13.306449Z","end":"2024-08-05T22:55:13.496968Z","steps":["trace[520917928] 'process raft request'  (duration: 190.350616ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:59:42 up 9 min,  0 users,  load average: 0.09, 0.50, 0.41
	Linux addons-435364 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [92eafd2fe5370e20300cf4b57a5758e16e3dee2bb64c465c25b601d07f7aa4c6] <==
	E0805 22:52:50.378858       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.162.58:443/apis/metrics.k8s.io/v1beta1: Get "https://10.106.162.58:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.106.162.58:443: connect: connection refused
	E0805 22:52:50.380301       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.162.58:443/apis/metrics.k8s.io/v1beta1: Get "https://10.106.162.58:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.106.162.58:443: connect: connection refused
	E0805 22:52:50.385748       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.162.58:443/apis/metrics.k8s.io/v1beta1: Get "https://10.106.162.58:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.106.162.58:443: connect: connection refused
	I0805 22:52:50.459080       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0805 22:53:27.476311       1 conn.go:339] Error on socket receive: read tcp 192.168.39.129:8443->192.168.39.1:53210: use of closed network connection
	E0805 22:53:27.697934       1 conn.go:339] Error on socket receive: read tcp 192.168.39.129:8443->192.168.39.1:53248: use of closed network connection
	E0805 22:54:04.812241       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0805 22:54:10.424991       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0805 22:54:32.490718       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.106.9.27"}
	I0805 22:54:38.096670       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0805 22:54:39.159214       1 cacher.go:168] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0805 22:54:43.600695       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0805 22:54:43.767931       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.101.124"}
	I0805 22:54:45.477321       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0805 22:54:45.477522       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0805 22:54:45.511195       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0805 22:54:45.511911       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0805 22:54:45.529030       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0805 22:54:45.529150       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0805 22:54:45.549126       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0805 22:54:45.549177       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0805 22:54:46.512892       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0805 22:54:46.550102       1 cacher.go:168] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0805 22:54:46.585572       1 cacher.go:168] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0805 22:57:04.355195       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.16.70"}
	
	
	==> kube-controller-manager [b5b169a97f6f0fee85e8a3c58958ef344c63040a0d46d50b287ab5277d491e7d] <==
	W0805 22:57:26.130859       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 22:57:26.131006       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 22:57:40.270475       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 22:57:40.270549       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 22:57:43.400907       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 22:57:43.400955       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 22:58:02.915804       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 22:58:02.915944       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 22:58:18.963879       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 22:58:18.963943       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 22:58:19.769738       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 22:58:19.769897       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 22:58:42.442789       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 22:58:42.442978       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 22:58:50.197469       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 22:58:50.197556       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 22:59:13.499937       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 22:59:13.500117       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 22:59:14.139474       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 22:59:14.139531       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0805 22:59:29.810049       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 22:59:29.810097       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0805 22:59:40.764979       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="11.359µs"
	W0805 22:59:41.334572       1 reflector.go:547] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0805 22:59:41.334805       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [ffd14a580eef1dd67f8e26cf09eeb41251619feba45e4ab0d12f7f5b32879188] <==
	I0805 22:50:43.764864       1 server_linux.go:69] "Using iptables proxy"
	I0805 22:50:43.852565       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.129"]
	I0805 22:50:44.051571       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0805 22:50:44.051657       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 22:50:44.051673       1 server_linux.go:165] "Using iptables Proxier"
	I0805 22:50:44.059768       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 22:50:44.059988       1 server.go:872] "Version info" version="v1.30.3"
	I0805 22:50:44.060024       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 22:50:44.061190       1 config.go:192] "Starting service config controller"
	I0805 22:50:44.061203       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 22:50:44.061227       1 config.go:101] "Starting endpoint slice config controller"
	I0805 22:50:44.061230       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 22:50:44.062014       1 config.go:319] "Starting node config controller"
	I0805 22:50:44.062025       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 22:50:44.161389       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0805 22:50:44.161430       1 shared_informer.go:320] Caches are synced for service config
	I0805 22:50:44.162724       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [e58d0c10af25f73f245cd49ac44d141e0b4dc75e8e4ac8995698b79ed373af5e] <==
	W0805 22:50:27.801729       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0805 22:50:27.801776       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0805 22:50:27.847448       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0805 22:50:27.847766       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0805 22:50:27.847709       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0805 22:50:27.847950       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0805 22:50:27.937590       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0805 22:50:27.938006       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0805 22:50:28.046502       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0805 22:50:28.046758       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0805 22:50:28.079240       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0805 22:50:28.079360       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0805 22:50:28.083787       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0805 22:50:28.083905       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0805 22:50:28.139963       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0805 22:50:28.140065       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0805 22:50:28.167314       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0805 22:50:28.167440       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0805 22:50:28.180871       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0805 22:50:28.181015       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0805 22:50:28.191763       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0805 22:50:28.192243       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0805 22:50:28.201723       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0805 22:50:28.201840       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0805 22:50:30.526104       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 05 22:57:29 addons-435364 kubelet[1261]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 22:57:29 addons-435364 kubelet[1261]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 22:57:29 addons-435364 kubelet[1261]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 22:57:29 addons-435364 kubelet[1261]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 22:57:30 addons-435364 kubelet[1261]: I0805 22:57:30.204260    1261 scope.go:117] "RemoveContainer" containerID="e99dd1415b76169a8c5445723cbdd7ed97fdcb7634e1df69cd4bfbe931586e5c"
	Aug 05 22:57:30 addons-435364 kubelet[1261]: I0805 22:57:30.239026    1261 scope.go:117] "RemoveContainer" containerID="85cf8d86410bef1c025ffe59434653e15360767265658f89ea62a0c43a9e5ca2"
	Aug 05 22:57:39 addons-435364 kubelet[1261]: I0805 22:57:39.654559    1261 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 05 22:58:29 addons-435364 kubelet[1261]: E0805 22:58:29.686773    1261 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 22:58:29 addons-435364 kubelet[1261]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 22:58:29 addons-435364 kubelet[1261]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 22:58:29 addons-435364 kubelet[1261]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 22:58:29 addons-435364 kubelet[1261]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 22:58:48 addons-435364 kubelet[1261]: I0805 22:58:48.654412    1261 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 05 22:59:29 addons-435364 kubelet[1261]: E0805 22:59:29.684301    1261 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 22:59:29 addons-435364 kubelet[1261]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 22:59:29 addons-435364 kubelet[1261]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 22:59:29 addons-435364 kubelet[1261]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 22:59:29 addons-435364 kubelet[1261]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 22:59:40 addons-435364 kubelet[1261]: I0805 22:59:40.796354    1261 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-6778b5fc9f-nbsh9" podStartSLOduration=154.294422463 podStartE2EDuration="2m36.796320532s" podCreationTimestamp="2024-08-05 22:57:04 +0000 UTC" firstStartedPulling="2024-08-05 22:57:04.741836908 +0000 UTC m=+395.219086510" lastFinishedPulling="2024-08-05 22:57:07.243734978 +0000 UTC m=+397.720984579" observedRunningTime="2024-08-05 22:57:07.377990007 +0000 UTC m=+397.855239608" watchObservedRunningTime="2024-08-05 22:59:40.796320532 +0000 UTC m=+551.273570152"
	Aug 05 22:59:42 addons-435364 kubelet[1261]: I0805 22:59:42.184343    1261 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f825462d-de15-4aa7-9436-76eda3bbd66f-tmp-dir\") pod \"f825462d-de15-4aa7-9436-76eda3bbd66f\" (UID: \"f825462d-de15-4aa7-9436-76eda3bbd66f\") "
	Aug 05 22:59:42 addons-435364 kubelet[1261]: I0805 22:59:42.184928    1261 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlzkt\" (UniqueName: \"kubernetes.io/projected/f825462d-de15-4aa7-9436-76eda3bbd66f-kube-api-access-tlzkt\") pod \"f825462d-de15-4aa7-9436-76eda3bbd66f\" (UID: \"f825462d-de15-4aa7-9436-76eda3bbd66f\") "
	Aug 05 22:59:42 addons-435364 kubelet[1261]: I0805 22:59:42.185146    1261 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f825462d-de15-4aa7-9436-76eda3bbd66f-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f825462d-de15-4aa7-9436-76eda3bbd66f" (UID: "f825462d-de15-4aa7-9436-76eda3bbd66f"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Aug 05 22:59:42 addons-435364 kubelet[1261]: I0805 22:59:42.195872    1261 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f825462d-de15-4aa7-9436-76eda3bbd66f-kube-api-access-tlzkt" (OuterVolumeSpecName: "kube-api-access-tlzkt") pod "f825462d-de15-4aa7-9436-76eda3bbd66f" (UID: "f825462d-de15-4aa7-9436-76eda3bbd66f"). InnerVolumeSpecName "kube-api-access-tlzkt". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 05 22:59:42 addons-435364 kubelet[1261]: I0805 22:59:42.285457    1261 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-tlzkt\" (UniqueName: \"kubernetes.io/projected/f825462d-de15-4aa7-9436-76eda3bbd66f-kube-api-access-tlzkt\") on node \"addons-435364\" DevicePath \"\""
	Aug 05 22:59:42 addons-435364 kubelet[1261]: I0805 22:59:42.285520    1261 reconciler_common.go:289] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f825462d-de15-4aa7-9436-76eda3bbd66f-tmp-dir\") on node \"addons-435364\" DevicePath \"\""
	
	
	==> storage-provisioner [7b5c994323214402a42053a26dbdf6aaa73eeb251beee1a898876e1c323893d5] <==
	I0805 22:50:50.943746       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0805 22:50:50.963903       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0805 22:50:50.964017       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0805 22:50:50.990996       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0805 22:50:50.991300       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-435364_e0160293-cccb-4792-999f-05db47e0382d!
	I0805 22:50:51.002504       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c09a9f56-5d3d-4b22-8bb5-14529760680a", APIVersion:"v1", ResourceVersion:"626", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-435364_e0160293-cccb-4792-999f-05db47e0382d became leader
	I0805 22:50:51.093124       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-435364_e0160293-cccb-4792-999f-05db47e0382d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-435364 -n addons-435364
helpers_test.go:261: (dbg) Run:  kubectl --context addons-435364 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-c59844bb4-m9t52
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-435364 describe pod metrics-server-c59844bb4-m9t52
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-435364 describe pod metrics-server-c59844bb4-m9t52: exit status 1 (62.324113ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-c59844bb4-m9t52" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-435364 describe pod metrics-server-c59844bb4-m9t52: exit status 1
--- FAIL: TestAddons/parallel/MetricsServer (366.97s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.38s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-435364
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-435364: exit status 82 (2m0.462500418s)

                                                
                                                
-- stdout --
	* Stopping node "addons-435364"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-435364" : exit status 82
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-435364
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-435364: exit status 11 (21.629206736s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.129:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-435364" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-435364
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-435364: exit status 11 (6.143157146s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.129:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-435364" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-435364
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-435364: exit status 11 (6.143771165s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.129:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-435364" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.38s)
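
The two addon failures above are downstream of the failed stop: once `minikube stop` exits 82 with the VM still reported as "Running", every follow-up command that needs SSH into 192.168.39.129 dies with "connect: no route to host". As a rough, hedged illustration (not part of the test suite; the address comes from this report, the 5s timeout is arbitrary), this Go sketch shows the kind of SSH reachability pre-check a wrapper could run before attempting `addons enable/disable`:

package main

import (
	"fmt"
	"net"
	"time"
)

// sshReachable reports whether the node's SSH endpoint accepts TCP connections.
// After the failed stop in this run, this check would fail and explain the
// MK_ADDON_ENABLE_PAUSED / MK_ADDON_DISABLE_PAUSED exits that follow.
func sshReachable(addr string, timeout time.Duration) bool {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	// 192.168.39.129 is the node IP reported earlier in this report.
	if !sshReachable("192.168.39.129:22", 5*time.Second) {
		fmt.Println("node SSH unreachable; addon enable/disable will fail until the VM is recovered")
	}
}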

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (189.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [93a40ded-7e39-4c49-bb7d-ebf5b9a1376a] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004702579s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-299463 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-299463 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-299463 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-299463 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-299463 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7b23ce82-3224-483d-a8d3-83212d727057] Pending
helpers_test.go:344: "sp-pod" [7b23ce82-3224-483d-a8d3-83212d727057] Pending: PodScheduled:Unschedulable (0/1 nodes are available: persistentvolume "pvc-111e8d8d-7a8e-4ade-a6fc-02c8d79bb45c" not found. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
helpers_test.go:344: "sp-pod" [7b23ce82-3224-483d-a8d3-83212d727057] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) unavailable due to one or more pvc(s) bound to non-existent pv(s). preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: context deadline exceeded ****
functional_test_pvc_test.go:130: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-299463 -n functional-299463
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2024-08-05 23:09:57.361404274 +0000 UTC m=+1334.773840784
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-299463 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-299463 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Containers:
myfrontend:
Image:        docker.io/nginx
Port:         <none>
Host Port:    <none>
Environment:  <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-65n2z (ro)
Conditions:
Type           Status
PodScheduled   False 
Volumes:
mypd:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  myclaim
ReadOnly:   false
kube-api-access-65n2z:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason            Age    From               Message
----     ------            ----   ----               -------
Warning  FailedScheduling  2m30s  default-scheduler  0/1 nodes are available: persistentvolume "pvc-111e8d8d-7a8e-4ade-a6fc-02c8d79bb45c" not found. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
Warning  FailedScheduling  2m25s  default-scheduler  0/1 nodes are available: 1 node(s) unavailable due to one or more pvc(s) bound to non-existent pv(s). preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-299463 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-299463 logs sp-pod -n default:
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: context deadline exceeded
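
The scheduler events above show the actual failure mode: sp-pod references claim "myclaim", but the PV the claim points at (pvc-111e8d8d-…) never materialized, so the pod stays Unschedulable until the 3m0s budget is exhausted. As a hedged sketch (assuming client-go and a kubeconfig at the default location; this is not code from the test suite), the following Go snippet shows the kind of wait-for-Bound check that separates a provisioner problem from a scheduling problem before the pod is created:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPVCBound polls a PersistentVolumeClaim until it reports phase Bound
// or the timeout elapses. Transient Get errors are ignored and polling continues.
func waitForPVCBound(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil
		}
		return pvc.Status.Phase == corev1.ClaimBound, nil
	})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// "myclaim" in namespace "default" matches the PVC used by this test run.
	if err := waitForPVCBound(context.Background(), cs, "default", "myclaim", 3*time.Minute); err != nil {
		fmt.Println("pvc never bound (provisioner problem):", err)
		return
	}
	fmt.Println("pvc bound; safe to create sp-pod")
}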
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-299463 -n functional-299463
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-299463 logs -n 25: (1.553801656s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| update-context | functional-299463                                                        | functional-299463 | jenkins | v1.33.1 | 05 Aug 24 23:07 UTC | 05 Aug 24 23:07 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	| update-context | functional-299463                                                        | functional-299463 | jenkins | v1.33.1 | 05 Aug 24 23:07 UTC | 05 Aug 24 23:07 UTC |
	|                | update-context                                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                   |                   |         |         |                     |                     |
	| image          | functional-299463                                                        | functional-299463 | jenkins | v1.33.1 | 05 Aug 24 23:07 UTC | 05 Aug 24 23:07 UTC |
	|                | image ls --format short                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| image          | functional-299463                                                        | functional-299463 | jenkins | v1.33.1 | 05 Aug 24 23:07 UTC | 05 Aug 24 23:07 UTC |
	|                | image ls --format yaml                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| ssh            | functional-299463 ssh pgrep                                              | functional-299463 | jenkins | v1.33.1 | 05 Aug 24 23:07 UTC |                     |
	|                | buildkitd                                                                |                   |         |         |                     |                     |
	| image          | functional-299463 image build -t                                         | functional-299463 | jenkins | v1.33.1 | 05 Aug 24 23:07 UTC | 05 Aug 24 23:07 UTC |
	|                | localhost/my-image:functional-299463                                     |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                         |                   |         |         |                     |                     |
	| ssh            | functional-299463 ssh stat                                               | functional-299463 | jenkins | v1.33.1 | 05 Aug 24 23:07 UTC | 05 Aug 24 23:07 UTC |
	|                | /mount-9p/created-by-test                                                |                   |         |         |                     |                     |
	| ssh            | functional-299463 ssh stat                                               | functional-299463 | jenkins | v1.33.1 | 05 Aug 24 23:07 UTC | 05 Aug 24 23:07 UTC |
	|                | /mount-9p/created-by-pod                                                 |                   |         |         |                     |                     |
	| ssh            | functional-299463 ssh sudo                                               | functional-299463 | jenkins | v1.33.1 | 05 Aug 24 23:07 UTC | 05 Aug 24 23:07 UTC |
	|                | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| mount          | -p functional-299463                                                     | functional-299463 | jenkins | v1.33.1 | 05 Aug 24 23:07 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdspecific-port1026513075/001:/mount-9p |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1 --port 46464                                      |                   |         |         |                     |                     |
	| ssh            | functional-299463 ssh findmnt                                            | functional-299463 | jenkins | v1.33.1 | 05 Aug 24 23:07 UTC |                     |
	|                | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| ssh            | functional-299463 ssh findmnt                                            | functional-299463 | jenkins | v1.33.1 | 05 Aug 24 23:07 UTC | 05 Aug 24 23:07 UTC |
	|                | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| image          | functional-299463 image ls                                               | functional-299463 | jenkins | v1.33.1 | 05 Aug 24 23:07 UTC | 05 Aug 24 23:07 UTC |
	| ssh            | functional-299463 ssh -- ls                                              | functional-299463 | jenkins | v1.33.1 | 05 Aug 24 23:07 UTC | 05 Aug 24 23:07 UTC |
	|                | -la /mount-9p                                                            |                   |         |         |                     |                     |
	| image          | functional-299463                                                        | functional-299463 | jenkins | v1.33.1 | 05 Aug 24 23:07 UTC | 05 Aug 24 23:07 UTC |
	|                | image ls --format table                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| image          | functional-299463                                                        | functional-299463 | jenkins | v1.33.1 | 05 Aug 24 23:07 UTC | 05 Aug 24 23:07 UTC |
	|                | image ls --format json                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| ssh            | functional-299463 ssh sudo                                               | functional-299463 | jenkins | v1.33.1 | 05 Aug 24 23:07 UTC |                     |
	|                | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| mount          | -p functional-299463                                                     | functional-299463 | jenkins | v1.33.1 | 05 Aug 24 23:07 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup78063072/001:/mount2     |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| mount          | -p functional-299463                                                     | functional-299463 | jenkins | v1.33.1 | 05 Aug 24 23:07 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup78063072/001:/mount3     |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| mount          | -p functional-299463                                                     | functional-299463 | jenkins | v1.33.1 | 05 Aug 24 23:07 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup78063072/001:/mount1     |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| ssh            | functional-299463 ssh findmnt                                            | functional-299463 | jenkins | v1.33.1 | 05 Aug 24 23:07 UTC |                     |
	|                | -T /mount1                                                               |                   |         |         |                     |                     |
	| ssh            | functional-299463 ssh findmnt                                            | functional-299463 | jenkins | v1.33.1 | 05 Aug 24 23:07 UTC | 05 Aug 24 23:07 UTC |
	|                | -T /mount1                                                               |                   |         |         |                     |                     |
	| ssh            | functional-299463 ssh findmnt                                            | functional-299463 | jenkins | v1.33.1 | 05 Aug 24 23:07 UTC | 05 Aug 24 23:07 UTC |
	|                | -T /mount2                                                               |                   |         |         |                     |                     |
	| ssh            | functional-299463 ssh findmnt                                            | functional-299463 | jenkins | v1.33.1 | 05 Aug 24 23:07 UTC | 05 Aug 24 23:07 UTC |
	|                | -T /mount3                                                               |                   |         |         |                     |                     |
	| mount          | -p functional-299463                                                     | functional-299463 | jenkins | v1.33.1 | 05 Aug 24 23:08 UTC |                     |
	|                | --kill=true                                                              |                   |         |         |                     |                     |
	|----------------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 23:07:51
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 23:07:51.804085   27244 out.go:291] Setting OutFile to fd 1 ...
	I0805 23:07:51.804364   27244 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:07:51.804375   27244 out.go:304] Setting ErrFile to fd 2...
	I0805 23:07:51.804379   27244 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:07:51.804693   27244 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-9606/.minikube/bin
	I0805 23:07:51.805218   27244 out.go:298] Setting JSON to false
	I0805 23:07:51.806219   27244 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3018,"bootTime":1722896254,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0805 23:07:51.806277   27244 start.go:139] virtualization: kvm guest
	I0805 23:07:51.808569   27244 out.go:177] * [functional-299463] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0805 23:07:51.810349   27244 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 23:07:51.810361   27244 notify.go:220] Checking for updates...
	I0805 23:07:51.813193   27244 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 23:07:51.814615   27244 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19373-9606/kubeconfig
	I0805 23:07:51.815984   27244 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19373-9606/.minikube
	I0805 23:07:51.817567   27244 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0805 23:07:51.819160   27244 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 23:07:51.821187   27244 config.go:182] Loaded profile config "functional-299463": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:07:51.821683   27244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:07:51.821742   27244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:07:51.836616   27244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39099
	I0805 23:07:51.837075   27244 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:07:51.837663   27244 main.go:141] libmachine: Using API Version  1
	I0805 23:07:51.837684   27244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:07:51.838048   27244 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:07:51.838287   27244 main.go:141] libmachine: (functional-299463) Calling .DriverName
	I0805 23:07:51.838538   27244 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 23:07:51.838827   27244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:07:51.838869   27244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:07:51.854551   27244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37043
	I0805 23:07:51.855244   27244 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:07:51.855849   27244 main.go:141] libmachine: Using API Version  1
	I0805 23:07:51.855881   27244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:07:51.856159   27244 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:07:51.856462   27244 main.go:141] libmachine: (functional-299463) Calling .DriverName
	I0805 23:07:51.889888   27244 out.go:177] * Using the kvm2 driver based on the existing profile
	I0805 23:07:51.891479   27244 start.go:297] selected driver: kvm2
	I0805 23:07:51.891499   27244 start.go:901] validating driver "kvm2" against &{Name:functional-299463 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-299463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.190 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 23:07:51.891639   27244 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 23:07:51.893814   27244 out.go:177] 
	W0805 23:07:51.895271   27244 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation of 250 MiB is less than the usable minimum of 1800 MB
	I0805 23:07:51.896871   27244 out.go:177] 
	
	
	==> CRI-O <==
	Aug 05 23:09:58 functional-299463 crio[4826]: time="2024-08-05 23:09:58.207004909Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ae33d843-73eb-4773-805e-ea621636a5d9 name=/runtime.v1.RuntimeService/Version
	Aug 05 23:09:58 functional-299463 crio[4826]: time="2024-08-05 23:09:58.208347800Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f90d69ea-cbfd-4a88-8872-c1e7da7cf0a6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:09:58 functional-299463 crio[4826]: time="2024-08-05 23:09:58.209324187Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722899398209294737,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:250737,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f90d69ea-cbfd-4a88-8872-c1e7da7cf0a6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:09:58 functional-299463 crio[4826]: time="2024-08-05 23:09:58.209957267Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=082ef09d-12bf-4f06-97e2-cd01b0cbd0b4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:09:58 functional-299463 crio[4826]: time="2024-08-05 23:09:58.210013834Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=082ef09d-12bf-4f06-97e2-cd01b0cbd0b4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:09:58 functional-299463 crio[4826]: time="2024-08-05 23:09:58.210387812Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:449e48ecf1865c8a1a6c6fe801b37b5c0a5a51627a302479bff784fb88c2ccd2,PodSandboxId:72c9576d5d0ca0c08e82c0768f4055c2e040552c8d4ec7a7f9971c915206f469,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1722899285929043098,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-b5fc48f67-7cgnf,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 0decaeed-b26f-4dff-bdb3-f5bf8ff0f09f,},Annotations:map[string]string{io.kube
rnetes.container.hash: f4dc3aea,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67f39af54dc24ccf4589e247873d596e93a6cef19dc58ea160fe44607a495560,PodSandboxId:ea29386bcdbc47b014da7204fbb81996d75501044fbfa6f052b094d9b4f53e65,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1722899282229468090,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-779776cb65-xzpg6,io.kubernetes
.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 8f477a33-cee8-48d7-8b33-9779a2323a9f,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4275b5,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43259b8b6a9155da35d76a07687b7f685640c3a4bf4926d7589ddcc58c56976f,PodSandboxId:d4b608b8e98bfaa672afadf8a0ef32fa44d72a8df54ea13687df4313623146ee,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1722899273299186859,Labels:map[string]string{io.k
ubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e36767a8-7330-4158-842f-8949ab1a0d95,},Annotations:map[string]string{io.kubernetes.container.hash: 852802ca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5034d1a7d4a45ec98dd9a048dae6f70e6cabc48f85fb3f53d4a1ed6ccbc36d01,PodSandboxId:975991fe739df26e0ba15070598e6e7bb2083565c3602ece609d2affe795457f,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1722899263765197194,Labels:map[string]string{io.kuber
netes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-57b4589c47-pb8sx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5ba59c63-9a77-49dc-b580-b4cf2136a81e,},Annotations:map[string]string{io.kubernetes.container.hash: 9dab4ebd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf3ddab3a460b10d461a9bef233ceb39dcec06ce6c6e1753793e8d2abccfe913,PodSandboxId:74c905298e75c6529f4bbd71ee0414873ff2e62c0236db8da11b3dce779f13f9,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1722899263659155897,Labels:map[string
]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-6d85cfcfd8-bvls2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 545b8cbe-5efb-4d9c-bcff-6fa15814afeb,},Annotations:map[string]string{io.kubernetes.container.hash: 94070ccf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a5aed1b8483a1d4d9372d447edb0e31f64b06ce34104c4cbcefc96712d9552a,PodSandboxId:69bbbfecd380ee8dca40f3ab21904d03affbf228aa035d38360503d54adf4c4e,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1722899259778287974,Labels:map[string
]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-64454c8b5c-2272b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 97b31bc8-211e-4919-bf6e-5b6f10bdb0bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3f52aa18,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52cb4fb0ea6086aeacb1791efb6ed966ed056f7760ce10261bf8f971d0aab296,PodSandboxId:92d4fe21f5f2e11933892689fd11040e75569b393f9afcbd51773737cd82e3e2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State
:CONTAINER_RUNNING,CreatedAt:1722899188574471204,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-299463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b2566ba423864dc7d692b6e91b80dc,},Annotations:map[string]string{io.kubernetes.container.hash: 82d9e66c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aec549fbcbee0fb35b8eb4a7505b9d9a2d2481db92401e43e26bd2c0a7e824e9,PodSandboxId:208337df63397bf81ab4acd525dca0b62f6b381f811a7f069a0c833ae5d07b9f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1722899166736319686,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93a40ded-7e39-4c49-bb7d-ebf5b9a1376a,},Annotations:map[string]string{io.kubernetes.container.hash: b3c4de78,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14fa5fcc0776ac19d40e962d58b8fac9f3c423d61800193586b1c6c8082ff98b,PodSandboxId:92d4fe21f5f2e11933892689fd11040e75569b393f9afcbd51773737cd82e3e2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,Created
At:1722899166835562461,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-299463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b2566ba423864dc7d692b6e91b80dc,},Annotations:map[string]string{io.kubernetes.container.hash: 82d9e66c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b7a6329469a8daa1f0052586315c5f052fb9ad4c9f229d30a4cd4f43fddb451,PodSandboxId:d7e7a284d8b823ce3065f99458edfd97739b1b59173a558f7ed0c44227163f94,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:17228991667280
52399,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrmjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab3ac050-cb0b-41b9-8bfb-da199c6555ac,},Annotations:map[string]string{io.kubernetes.container.hash: c32db193,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3da8fa227bfc69449cda034d87fc4d634ec4cf63611679c9a25d7680bd18263,PodSandboxId:296620a86eef8adf142cca5283900fc1d86fb82a16683f4184a52970d6536324,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722899166740444231,Labels:map[string]string{io.ku
bernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9hm54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78d97c0-e51e-4505-a099-a6c9bb76e303,},Annotations:map[string]string{io.kubernetes.container.hash: 44313478,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49751aba72eec98d1612da22fd81450c446cc6aec3f32fbc741b9df28f56d447,PodSandboxId:3ad426cf1b4060363da01ccf2f1db27b9312348c3b053daf4723bee2878cf6b9,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722899161336840314,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-299463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e297d9f4cd386cf2596e000d17804fa,},Annotations:map[string]string{io.kubernetes.container.hash: b7bf53ca,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28788bc4ca8db465ec428c389ce83400231592ff8d617aecf0326d829b97361b,PodSandboxId:ffc157f67e20c0630718d248e362ce4f77bb4d62a64e9efb227f9895cc4568ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722899154857966830,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-299463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77d10334c905c6af7f9e4a41c2593db0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e04b5c8429e3c806ce7e14a70a8c90816ec5add9f2ba0c67f5cc57dc598fd7dc,PodSandboxId:c5245291d0b066a9037e050469b905c9d8e4407606a2153c3c25678bc03b878f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722899154845562890,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-299463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7726436fca7eafaad35475c9d8f8ee,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15bbcd193c2151deb1353a7e11a5831639c9909f652cf87ee7cf5d3ddb012707,PodSandboxId:296620a86eef8adf142cca5283900fc1d86fb82a16683f4184a52970d6536324,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722899153516377303,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9hm54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78d97c0-e51e-4505-a099-a6c9bb76e303,},Annotations:map[string]string{io.kubernetes.container.hash: 44313478,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba33cf41bc99a98eeb2f4f77ecd320e237c376af51933411e5636a965e756797,PodSandboxId:d7e7a284d8b823ce3065f99458edfd97739b1b59173a558f7ed0c44227163f94,Metadata:&ContainerMet
adata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722899153113501458,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrmjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab3ac050-cb0b-41b9-8bfb-da199c6555ac,},Annotations:map[string]string{io.kubernetes.container.hash: c32db193,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:948c1e01ffb6bf1cc5792dc9787ca1dde3c7df35251b8eec31b4dcdc24332cf6,PodSandboxId:cd445aa024fb14d5195a5c8e9d09a5890a9ec7c8f57a69ab5e1b6989bad4faed,Metadata:&ContainerMetadata{Name:storage-provisioner,Att
empt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722899116671372749,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93a40ded-7e39-4c49-bb7d-ebf5b9a1376a,},Annotations:map[string]string{io.kubernetes.container.hash: b3c4de78,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4851c0f61cd98c1555b57deca59545b1355fee71e3b42a82ef298d1e97f9acd,PodSandboxId:f6a51349696603fbc8b4e0eb3030eb2c424180116decf206f69abf08f5c21f2d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Im
age:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722899112940822939,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-299463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e297d9f4cd386cf2596e000d17804fa,},Annotations:map[string]string{io.kubernetes.container.hash: b7bf53ca,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84ed7a9ec1281dc1a43f723c40a7a6c52fc9a9f9f4f426954587e8882775fb0a,PodSandboxId:2d2e0bc9db8b7231417a2df11bdd19ea7eed0e01c6fdc9530427748ce5acac8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09c
aacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722899112907458931,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-299463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77d10334c905c6af7f9e4a41c2593db0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8112526f29a49da40c17b7003db2ebb0d40cdfb8f7b3ab3c2500879640a9e79b,PodSandboxId:abad861d44749a125b35c34f19346a7b124807ef92a9ca21543884e4498b2689,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f046
6dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722899112916553910,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-299463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7726436fca7eafaad35475c9d8f8ee,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=082ef09d-12bf-4f06-97e2-cd01b0cbd0b4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:09:58 functional-299463 crio[4826]: time="2024-08-05 23:09:58.246492804Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=aed1fd46-beeb-4314-8966-ae5d61abc1bb name=/runtime.v1.RuntimeService/Version
	Aug 05 23:09:58 functional-299463 crio[4826]: time="2024-08-05 23:09:58.246575826Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=aed1fd46-beeb-4314-8966-ae5d61abc1bb name=/runtime.v1.RuntimeService/Version
	Aug 05 23:09:58 functional-299463 crio[4826]: time="2024-08-05 23:09:58.247730123Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5e8df802-d6c2-4ba1-837b-0208e143a4c0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:09:58 functional-299463 crio[4826]: time="2024-08-05 23:09:58.248746198Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722899398248722816,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:250737,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5e8df802-d6c2-4ba1-837b-0208e143a4c0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:09:58 functional-299463 crio[4826]: time="2024-08-05 23:09:58.249255073Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dc14af9e-c2e3-49b1-bd3a-7375e09d5b2e name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:09:58 functional-299463 crio[4826]: time="2024-08-05 23:09:58.249309828Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dc14af9e-c2e3-49b1-bd3a-7375e09d5b2e name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:09:58 functional-299463 crio[4826]: time="2024-08-05 23:09:58.249796740Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:449e48ecf1865c8a1a6c6fe801b37b5c0a5a51627a302479bff784fb88c2ccd2,PodSandboxId:72c9576d5d0ca0c08e82c0768f4055c2e040552c8d4ec7a7f9971c915206f469,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1722899285929043098,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-b5fc48f67-7cgnf,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 0decaeed-b26f-4dff-bdb3-f5bf8ff0f09f,},Annotations:map[string]string{io.kube
rnetes.container.hash: f4dc3aea,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67f39af54dc24ccf4589e247873d596e93a6cef19dc58ea160fe44607a495560,PodSandboxId:ea29386bcdbc47b014da7204fbb81996d75501044fbfa6f052b094d9b4f53e65,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1722899282229468090,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-779776cb65-xzpg6,io.kubernetes
.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 8f477a33-cee8-48d7-8b33-9779a2323a9f,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4275b5,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43259b8b6a9155da35d76a07687b7f685640c3a4bf4926d7589ddcc58c56976f,PodSandboxId:d4b608b8e98bfaa672afadf8a0ef32fa44d72a8df54ea13687df4313623146ee,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1722899273299186859,Labels:map[string]string{io.k
ubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e36767a8-7330-4158-842f-8949ab1a0d95,},Annotations:map[string]string{io.kubernetes.container.hash: 852802ca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5034d1a7d4a45ec98dd9a048dae6f70e6cabc48f85fb3f53d4a1ed6ccbc36d01,PodSandboxId:975991fe739df26e0ba15070598e6e7bb2083565c3602ece609d2affe795457f,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1722899263765197194,Labels:map[string]string{io.kuber
netes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-57b4589c47-pb8sx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5ba59c63-9a77-49dc-b580-b4cf2136a81e,},Annotations:map[string]string{io.kubernetes.container.hash: 9dab4ebd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf3ddab3a460b10d461a9bef233ceb39dcec06ce6c6e1753793e8d2abccfe913,PodSandboxId:74c905298e75c6529f4bbd71ee0414873ff2e62c0236db8da11b3dce779f13f9,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1722899263659155897,Labels:map[string
]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-6d85cfcfd8-bvls2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 545b8cbe-5efb-4d9c-bcff-6fa15814afeb,},Annotations:map[string]string{io.kubernetes.container.hash: 94070ccf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a5aed1b8483a1d4d9372d447edb0e31f64b06ce34104c4cbcefc96712d9552a,PodSandboxId:69bbbfecd380ee8dca40f3ab21904d03affbf228aa035d38360503d54adf4c4e,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1722899259778287974,Labels:map[string
]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-64454c8b5c-2272b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 97b31bc8-211e-4919-bf6e-5b6f10bdb0bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3f52aa18,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52cb4fb0ea6086aeacb1791efb6ed966ed056f7760ce10261bf8f971d0aab296,PodSandboxId:92d4fe21f5f2e11933892689fd11040e75569b393f9afcbd51773737cd82e3e2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State
:CONTAINER_RUNNING,CreatedAt:1722899188574471204,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-299463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b2566ba423864dc7d692b6e91b80dc,},Annotations:map[string]string{io.kubernetes.container.hash: 82d9e66c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aec549fbcbee0fb35b8eb4a7505b9d9a2d2481db92401e43e26bd2c0a7e824e9,PodSandboxId:208337df63397bf81ab4acd525dca0b62f6b381f811a7f069a0c833ae5d07b9f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1722899166736319686,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93a40ded-7e39-4c49-bb7d-ebf5b9a1376a,},Annotations:map[string]string{io.kubernetes.container.hash: b3c4de78,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14fa5fcc0776ac19d40e962d58b8fac9f3c423d61800193586b1c6c8082ff98b,PodSandboxId:92d4fe21f5f2e11933892689fd11040e75569b393f9afcbd51773737cd82e3e2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,Created
At:1722899166835562461,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-299463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b2566ba423864dc7d692b6e91b80dc,},Annotations:map[string]string{io.kubernetes.container.hash: 82d9e66c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b7a6329469a8daa1f0052586315c5f052fb9ad4c9f229d30a4cd4f43fddb451,PodSandboxId:d7e7a284d8b823ce3065f99458edfd97739b1b59173a558f7ed0c44227163f94,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:17228991667280
52399,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrmjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab3ac050-cb0b-41b9-8bfb-da199c6555ac,},Annotations:map[string]string{io.kubernetes.container.hash: c32db193,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3da8fa227bfc69449cda034d87fc4d634ec4cf63611679c9a25d7680bd18263,PodSandboxId:296620a86eef8adf142cca5283900fc1d86fb82a16683f4184a52970d6536324,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722899166740444231,Labels:map[string]string{io.ku
bernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9hm54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78d97c0-e51e-4505-a099-a6c9bb76e303,},Annotations:map[string]string{io.kubernetes.container.hash: 44313478,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49751aba72eec98d1612da22fd81450c446cc6aec3f32fbc741b9df28f56d447,PodSandboxId:3ad426cf1b4060363da01ccf2f1db27b9312348c3b053daf4723bee2878cf6b9,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722899161336840314,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-299463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e297d9f4cd386cf2596e000d17804fa,},Annotations:map[string]string{io.kubernetes.container.hash: b7bf53ca,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28788bc4ca8db465ec428c389ce83400231592ff8d617aecf0326d829b97361b,PodSandboxId:ffc157f67e20c0630718d248e362ce4f77bb4d62a64e9efb227f9895cc4568ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722899154857966830,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-299463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77d10334c905c6af7f9e4a41c2593db0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e04b5c8429e3c806ce7e14a70a8c90816ec5add9f2ba0c67f5cc57dc598fd7dc,PodSandboxId:c5245291d0b066a9037e050469b905c9d8e4407606a2153c3c25678bc03b878f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722899154845562890,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-299463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7726436fca7eafaad35475c9d8f8ee,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15bbcd193c2151deb1353a7e11a5831639c9909f652cf87ee7cf5d3ddb012707,PodSandboxId:296620a86eef8adf142cca5283900fc1d86fb82a16683f4184a52970d6536324,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722899153516377303,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9hm54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78d97c0-e51e-4505-a099-a6c9bb76e303,},Annotations:map[string]string{io.kubernetes.container.hash: 44313478,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba33cf41bc99a98eeb2f4f77ecd320e237c376af51933411e5636a965e756797,PodSandboxId:d7e7a284d8b823ce3065f99458edfd97739b1b59173a558f7ed0c44227163f94,Metadata:&ContainerMet
adata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722899153113501458,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrmjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab3ac050-cb0b-41b9-8bfb-da199c6555ac,},Annotations:map[string]string{io.kubernetes.container.hash: c32db193,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:948c1e01ffb6bf1cc5792dc9787ca1dde3c7df35251b8eec31b4dcdc24332cf6,PodSandboxId:cd445aa024fb14d5195a5c8e9d09a5890a9ec7c8f57a69ab5e1b6989bad4faed,Metadata:&ContainerMetadata{Name:storage-provisioner,Att
empt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722899116671372749,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93a40ded-7e39-4c49-bb7d-ebf5b9a1376a,},Annotations:map[string]string{io.kubernetes.container.hash: b3c4de78,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4851c0f61cd98c1555b57deca59545b1355fee71e3b42a82ef298d1e97f9acd,PodSandboxId:f6a51349696603fbc8b4e0eb3030eb2c424180116decf206f69abf08f5c21f2d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Im
age:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722899112940822939,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-299463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e297d9f4cd386cf2596e000d17804fa,},Annotations:map[string]string{io.kubernetes.container.hash: b7bf53ca,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84ed7a9ec1281dc1a43f723c40a7a6c52fc9a9f9f4f426954587e8882775fb0a,PodSandboxId:2d2e0bc9db8b7231417a2df11bdd19ea7eed0e01c6fdc9530427748ce5acac8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09c
aacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722899112907458931,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-299463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77d10334c905c6af7f9e4a41c2593db0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8112526f29a49da40c17b7003db2ebb0d40cdfb8f7b3ab3c2500879640a9e79b,PodSandboxId:abad861d44749a125b35c34f19346a7b124807ef92a9ca21543884e4498b2689,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f046
6dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722899112916553910,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-299463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7726436fca7eafaad35475c9d8f8ee,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dc14af9e-c2e3-49b1-bd3a-7375e09d5b2e name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:09:58 functional-299463 crio[4826]: time="2024-08-05 23:09:58.270314664Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=83c68779-6ed6-499a-9fbc-75d58d09ae9b name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 05 23:09:58 functional-299463 crio[4826]: time="2024-08-05 23:09:58.270799766Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:72c9576d5d0ca0c08e82c0768f4055c2e040552c8d4ec7a7f9971c915206f469,Metadata:&PodSandboxMetadata{Name:dashboard-metrics-scraper-b5fc48f67-7cgnf,Uid:0decaeed-b26f-4dff-bdb3-f5bf8ff0f09f,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722899275695115132,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: dashboard-metrics-scraper-b5fc48f67-7cgnf,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 0decaeed-b26f-4dff-bdb3-f5bf8ff0f09f,k8s-app: dashboard-metrics-scraper,pod-template-hash: b5fc48f67,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-05T23:07:53.581429677Z,kubernetes.io/config.source: api,seccomp.security.alpha.kubernetes.io/pod: runtime/default,},RuntimeHandler:,},&PodSandbox{Id:ea29386bcdbc47b014da7204fbb81996d75501044fbfa6f052b094d
9b4f53e65,Metadata:&PodSandboxMetadata{Name:kubernetes-dashboard-779776cb65-xzpg6,Uid:8f477a33-cee8-48d7-8b33-9779a2323a9f,Namespace:kubernetes-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722899275329257395,Labels:map[string]string{gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kubernetes-dashboard-779776cb65-xzpg6,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 8f477a33-cee8-48d7-8b33-9779a2323a9f,k8s-app: kubernetes-dashboard,pod-template-hash: 779776cb65,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-05T23:07:53.519901318Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d4b608b8e98bfaa672afadf8a0ef32fa44d72a8df54ea13687df4313623146ee,Metadata:&PodSandboxMetadata{Name:busybox-mount,Uid:e36767a8-7330-4158-842f-8949ab1a0d95,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1722899270114320927,Labels:map[string]string{integration-test: busybox-mount,io.kubernetes.container.name: POD,io.k
ubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e36767a8-7330-4158-842f-8949ab1a0d95,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-05T23:07:49.804147522Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:74c905298e75c6529f4bbd71ee0414873ff2e62c0236db8da11b3dce779f13f9,Metadata:&PodSandboxMetadata{Name:hello-node-6d85cfcfd8-bvls2,Uid:545b8cbe-5efb-4d9c-bcff-6fa15814afeb,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722899247396436423,Labels:map[string]string{app: hello-node,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-node-6d85cfcfd8-bvls2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 545b8cbe-5efb-4d9c-bcff-6fa15814afeb,pod-template-hash: 6d85cfcfd8,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-05T23:07:27.074756982Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:69bbbfecd380ee8dca40f3ab21904d03affbf228aa035d38360503d54adf4c4e,Metadata:&
PodSandboxMetadata{Name:mysql-64454c8b5c-2272b,Uid:97b31bc8-211e-4919-bf6e-5b6f10bdb0bc,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722899247395315599,Labels:map[string]string{app: mysql,io.kubernetes.container.name: POD,io.kubernetes.pod.name: mysql-64454c8b5c-2272b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 97b31bc8-211e-4919-bf6e-5b6f10bdb0bc,pod-template-hash: 64454c8b5c,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-05T23:07:27.078555425Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:975991fe739df26e0ba15070598e6e7bb2083565c3602ece609d2affe795457f,Metadata:&PodSandboxMetadata{Name:hello-node-connect-57b4589c47-pb8sx,Uid:5ba59c63-9a77-49dc-b580-b4cf2136a81e,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722899247379901045,Labels:map[string]string{app: hello-node-connect,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-node-connect-57b4589c47-pb8sx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.
uid: 5ba59c63-9a77-49dc-b580-b4cf2136a81e,pod-template-hash: 57b4589c47,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-05T23:07:27.068631120Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:92d4fe21f5f2e11933892689fd11040e75569b393f9afcbd51773737cd82e3e2,Metadata:&PodSandboxMetadata{Name:kube-apiserver-functional-299463,Uid:a3b2566ba423864dc7d692b6e91b80dc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722899166042694314,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-functional-299463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b2566ba423864dc7d692b6e91b80dc,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.190:8441,kubernetes.io/config.hash: a3b2566ba423864dc7d692b6e91b80dc,kubernetes.io/config.seen: 2024-08-05T23:06:05.399500243Z,kubernetes.io/config.source: file,},RuntimeHand
ler:,},&PodSandbox{Id:296620a86eef8adf142cca5283900fc1d86fb82a16683f4184a52970d6536324,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-9hm54,Uid:b78d97c0-e51e-4505-a099-a6c9bb76e303,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722899153093978609,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-9hm54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78d97c0-e51e-4505-a099-a6c9bb76e303,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-05T23:05:16.356364360Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c5245291d0b066a9037e050469b905c9d8e4407606a2153c3c25678bc03b878f,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-299463,Uid:1d7726436fca7eafaad35475c9d8f8ee,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722899152911586108,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.co
ntainer.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-299463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7726436fca7eafaad35475c9d8f8ee,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1d7726436fca7eafaad35475c9d8f8ee,kubernetes.io/config.seen: 2024-08-05T23:05:12.360569457Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ffc157f67e20c0630718d248e362ce4f77bb4d62a64e9efb227f9895cc4568ed,Metadata:&PodSandboxMetadata{Name:kube-scheduler-functional-299463,Uid:77d10334c905c6af7f9e4a41c2593db0,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722899152884594296,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-299463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77d10334c905c6af7f9e4a41c2593db0,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 77d10334c905c6af7f9e4a41c2593db0,
kubernetes.io/config.seen: 2024-08-05T23:05:12.360570520Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3ad426cf1b4060363da01ccf2f1db27b9312348c3b053daf4723bee2878cf6b9,Metadata:&PodSandboxMetadata{Name:etcd-functional-299463,Uid:8e297d9f4cd386cf2596e000d17804fa,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722899152866464481,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-299463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e297d9f4cd386cf2596e000d17804fa,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.190:2379,kubernetes.io/config.hash: 8e297d9f4cd386cf2596e000d17804fa,kubernetes.io/config.seen: 2024-08-05T23:05:12.360563559Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:208337df63397bf81ab4acd525dca0b62f6b381f811a7f069a0c833ae5d07b9f,Metadata:&PodSandboxMetadata{Name:storage-provisioner
,Uid:93a40ded-7e39-4c49-bb7d-ebf5b9a1376a,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722899152787929868,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93a40ded-7e39-4c49-bb7d-ebf5b9a1376a,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"service
AccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-05T23:05:16.356376980Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d7e7a284d8b823ce3065f99458edfd97739b1b59173a558f7ed0c44227163f94,Metadata:&PodSandboxMetadata{Name:kube-proxy-wrmjf,Uid:ab3ac050-cb0b-41b9-8bfb-da199c6555ac,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722899152729756707,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-wrmjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab3ac050-cb0b-41b9-8bfb-da199c6555ac,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-05T23:05:16.356373919Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2d2e0bc9db8b7231417a2df11bdd19ea7eed0e01c6fdc9530427748ce5acac8f,Metadata:&PodSandb
oxMetadata{Name:kube-scheduler-functional-299463,Uid:77d10334c905c6af7f9e4a41c2593db0,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722899108609608704,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-functional-299463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77d10334c905c6af7f9e4a41c2593db0,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 77d10334c905c6af7f9e4a41c2593db0,kubernetes.io/config.seen: 2024-08-05T23:03:57.160753401Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f6a51349696603fbc8b4e0eb3030eb2c424180116decf206f69abf08f5c21f2d,Metadata:&PodSandboxMetadata{Name:etcd-functional-299463,Uid:8e297d9f4cd386cf2596e000d17804fa,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722899108597733312,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-functional-299463,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e297d9f4cd386cf2596e000d17804fa,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.190:2379,kubernetes.io/config.hash: 8e297d9f4cd386cf2596e000d17804fa,kubernetes.io/config.seen: 2024-08-05T23:03:57.160747604Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:abad861d44749a125b35c34f19346a7b124807ef92a9ca21543884e4498b2689,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-functional-299463,Uid:1d7726436fca7eafaad35475c9d8f8ee,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722899108547381257,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-functional-299463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7726436fca7eafaad35475c9d8f8ee,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1d7726436fca7e
afaad35475c9d8f8ee,kubernetes.io/config.seen: 2024-08-05T23:03:57.160752452Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:cd445aa024fb14d5195a5c8e9d09a5890a9ec7c8f57a69ab5e1b6989bad4faed,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:93a40ded-7e39-4c49-bb7d-ebf5b9a1376a,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722899108533872593,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93a40ded-7e39-4c49-bb7d-ebf5b9a1376a,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"contain
ers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-05T23:04:12.031875810Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=83c68779-6ed6-499a-9fbc-75d58d09ae9b name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 05 23:09:58 functional-299463 crio[4826]: time="2024-08-05 23:09:58.271835577Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=48703074-53bd-43f1-bda0-c87b2ac491d3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:09:58 functional-299463 crio[4826]: time="2024-08-05 23:09:58.271903481Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=48703074-53bd-43f1-bda0-c87b2ac491d3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:09:58 functional-299463 crio[4826]: time="2024-08-05 23:09:58.272310548Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:449e48ecf1865c8a1a6c6fe801b37b5c0a5a51627a302479bff784fb88c2ccd2,PodSandboxId:72c9576d5d0ca0c08e82c0768f4055c2e040552c8d4ec7a7f9971c915206f469,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1722899285929043098,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-b5fc48f67-7cgnf,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 0decaeed-b26f-4dff-bdb3-f5bf8ff0f09f,},Annotations:map[string]string{io.kube
rnetes.container.hash: f4dc3aea,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67f39af54dc24ccf4589e247873d596e93a6cef19dc58ea160fe44607a495560,PodSandboxId:ea29386bcdbc47b014da7204fbb81996d75501044fbfa6f052b094d9b4f53e65,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1722899282229468090,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-779776cb65-xzpg6,io.kubernetes
.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 8f477a33-cee8-48d7-8b33-9779a2323a9f,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4275b5,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43259b8b6a9155da35d76a07687b7f685640c3a4bf4926d7589ddcc58c56976f,PodSandboxId:d4b608b8e98bfaa672afadf8a0ef32fa44d72a8df54ea13687df4313623146ee,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1722899273299186859,Labels:map[string]string{io.k
ubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e36767a8-7330-4158-842f-8949ab1a0d95,},Annotations:map[string]string{io.kubernetes.container.hash: 852802ca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5034d1a7d4a45ec98dd9a048dae6f70e6cabc48f85fb3f53d4a1ed6ccbc36d01,PodSandboxId:975991fe739df26e0ba15070598e6e7bb2083565c3602ece609d2affe795457f,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1722899263765197194,Labels:map[string]string{io.kuber
netes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-57b4589c47-pb8sx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5ba59c63-9a77-49dc-b580-b4cf2136a81e,},Annotations:map[string]string{io.kubernetes.container.hash: 9dab4ebd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf3ddab3a460b10d461a9bef233ceb39dcec06ce6c6e1753793e8d2abccfe913,PodSandboxId:74c905298e75c6529f4bbd71ee0414873ff2e62c0236db8da11b3dce779f13f9,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1722899263659155897,Labels:map[string
]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-6d85cfcfd8-bvls2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 545b8cbe-5efb-4d9c-bcff-6fa15814afeb,},Annotations:map[string]string{io.kubernetes.container.hash: 94070ccf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a5aed1b8483a1d4d9372d447edb0e31f64b06ce34104c4cbcefc96712d9552a,PodSandboxId:69bbbfecd380ee8dca40f3ab21904d03affbf228aa035d38360503d54adf4c4e,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1722899259778287974,Labels:map[string
]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-64454c8b5c-2272b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 97b31bc8-211e-4919-bf6e-5b6f10bdb0bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3f52aa18,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52cb4fb0ea6086aeacb1791efb6ed966ed056f7760ce10261bf8f971d0aab296,PodSandboxId:92d4fe21f5f2e11933892689fd11040e75569b393f9afcbd51773737cd82e3e2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State
:CONTAINER_RUNNING,CreatedAt:1722899188574471204,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-299463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b2566ba423864dc7d692b6e91b80dc,},Annotations:map[string]string{io.kubernetes.container.hash: 82d9e66c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aec549fbcbee0fb35b8eb4a7505b9d9a2d2481db92401e43e26bd2c0a7e824e9,PodSandboxId:208337df63397bf81ab4acd525dca0b62f6b381f811a7f069a0c833ae5d07b9f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1722899166736319686,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93a40ded-7e39-4c49-bb7d-ebf5b9a1376a,},Annotations:map[string]string{io.kubernetes.container.hash: b3c4de78,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14fa5fcc0776ac19d40e962d58b8fac9f3c423d61800193586b1c6c8082ff98b,PodSandboxId:92d4fe21f5f2e11933892689fd11040e75569b393f9afcbd51773737cd82e3e2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,Created
At:1722899166835562461,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-299463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b2566ba423864dc7d692b6e91b80dc,},Annotations:map[string]string{io.kubernetes.container.hash: 82d9e66c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b7a6329469a8daa1f0052586315c5f052fb9ad4c9f229d30a4cd4f43fddb451,PodSandboxId:d7e7a284d8b823ce3065f99458edfd97739b1b59173a558f7ed0c44227163f94,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:17228991667280
52399,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrmjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab3ac050-cb0b-41b9-8bfb-da199c6555ac,},Annotations:map[string]string{io.kubernetes.container.hash: c32db193,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3da8fa227bfc69449cda034d87fc4d634ec4cf63611679c9a25d7680bd18263,PodSandboxId:296620a86eef8adf142cca5283900fc1d86fb82a16683f4184a52970d6536324,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722899166740444231,Labels:map[string]string{io.ku
bernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9hm54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78d97c0-e51e-4505-a099-a6c9bb76e303,},Annotations:map[string]string{io.kubernetes.container.hash: 44313478,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49751aba72eec98d1612da22fd81450c446cc6aec3f32fbc741b9df28f56d447,PodSandboxId:3ad426cf1b4060363da01ccf2f1db27b9312348c3b053daf4723bee2878cf6b9,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722899161336840314,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-299463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e297d9f4cd386cf2596e000d17804fa,},Annotations:map[string]string{io.kubernetes.container.hash: b7bf53ca,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28788bc4ca8db465ec428c389ce83400231592ff8d617aecf0326d829b97361b,PodSandboxId:ffc157f67e20c0630718d248e362ce4f77bb4d62a64e9efb227f9895cc4568ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722899154857966830,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-299463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77d10334c905c6af7f9e4a41c2593db0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e04b5c8429e3c806ce7e14a70a8c90816ec5add9f2ba0c67f5cc57dc598fd7dc,PodSandboxId:c5245291d0b066a9037e050469b905c9d8e4407606a2153c3c25678bc03b878f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722899154845562890,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-299463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7726436fca7eafaad35475c9d8f8ee,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15bbcd193c2151deb1353a7e11a5831639c9909f652cf87ee7cf5d3ddb012707,PodSandboxId:296620a86eef8adf142cca5283900fc1d86fb82a16683f4184a52970d6536324,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722899153516377303,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9hm54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78d97c0-e51e-4505-a099-a6c9bb76e303,},Annotations:map[string]string{io.kubernetes.container.hash: 44313478,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba33cf41bc99a98eeb2f4f77ecd320e237c376af51933411e5636a965e756797,PodSandboxId:d7e7a284d8b823ce3065f99458edfd97739b1b59173a558f7ed0c44227163f94,Metadata:&ContainerMet
adata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722899153113501458,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrmjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab3ac050-cb0b-41b9-8bfb-da199c6555ac,},Annotations:map[string]string{io.kubernetes.container.hash: c32db193,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:948c1e01ffb6bf1cc5792dc9787ca1dde3c7df35251b8eec31b4dcdc24332cf6,PodSandboxId:cd445aa024fb14d5195a5c8e9d09a5890a9ec7c8f57a69ab5e1b6989bad4faed,Metadata:&ContainerMetadata{Name:storage-provisioner,Att
empt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722899116671372749,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93a40ded-7e39-4c49-bb7d-ebf5b9a1376a,},Annotations:map[string]string{io.kubernetes.container.hash: b3c4de78,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4851c0f61cd98c1555b57deca59545b1355fee71e3b42a82ef298d1e97f9acd,PodSandboxId:f6a51349696603fbc8b4e0eb3030eb2c424180116decf206f69abf08f5c21f2d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Im
age:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722899112940822939,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-299463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e297d9f4cd386cf2596e000d17804fa,},Annotations:map[string]string{io.kubernetes.container.hash: b7bf53ca,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84ed7a9ec1281dc1a43f723c40a7a6c52fc9a9f9f4f426954587e8882775fb0a,PodSandboxId:2d2e0bc9db8b7231417a2df11bdd19ea7eed0e01c6fdc9530427748ce5acac8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09c
aacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722899112907458931,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-299463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77d10334c905c6af7f9e4a41c2593db0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8112526f29a49da40c17b7003db2ebb0d40cdfb8f7b3ab3c2500879640a9e79b,PodSandboxId:abad861d44749a125b35c34f19346a7b124807ef92a9ca21543884e4498b2689,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f046
6dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722899112916553910,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-299463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7726436fca7eafaad35475c9d8f8ee,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=48703074-53bd-43f1-bda0-c87b2ac491d3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:09:58 functional-299463 crio[4826]: time="2024-08-05 23:09:58.295753776Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=72b39285-03fd-4157-9be0-828d0c1a932d name=/runtime.v1.RuntimeService/Version
	Aug 05 23:09:58 functional-299463 crio[4826]: time="2024-08-05 23:09:58.295849933Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=72b39285-03fd-4157-9be0-828d0c1a932d name=/runtime.v1.RuntimeService/Version
	Aug 05 23:09:58 functional-299463 crio[4826]: time="2024-08-05 23:09:58.297216860Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=953e2b8a-44cc-4f22-b5a8-5d9d38d0444f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:09:58 functional-299463 crio[4826]: time="2024-08-05 23:09:58.298013078Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722899398297988988,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:250737,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=953e2b8a-44cc-4f22-b5a8-5d9d38d0444f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:09:58 functional-299463 crio[4826]: time="2024-08-05 23:09:58.298699431Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3b766e4f-89af-4582-b6d5-d2583c6d30a0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:09:58 functional-299463 crio[4826]: time="2024-08-05 23:09:58.298759908Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3b766e4f-89af-4582-b6d5-d2583c6d30a0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:09:58 functional-299463 crio[4826]: time="2024-08-05 23:09:58.299186951Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:449e48ecf1865c8a1a6c6fe801b37b5c0a5a51627a302479bff784fb88c2ccd2,PodSandboxId:72c9576d5d0ca0c08e82c0768f4055c2e040552c8d4ec7a7f9971c915206f469,Metadata:&ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7,State:CONTAINER_RUNNING,CreatedAt:1722899285929043098,Labels:map[string]string{io.kubernetes.container.name: dashboard-metrics-scraper,io.kubernetes.pod.name: dashboard-metrics-scraper-b5fc48f67-7cgnf,io.kubernetes.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 0decaeed-b26f-4dff-bdb3-f5bf8ff0f09f,},Annotations:map[string]string{io.kube
rnetes.container.hash: f4dc3aea,io.kubernetes.container.ports: [{\"containerPort\":8000,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67f39af54dc24ccf4589e247873d596e93a6cef19dc58ea160fe44607a495560,PodSandboxId:ea29386bcdbc47b014da7204fbb81996d75501044fbfa6f052b094d9b4f53e65,Metadata:&ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,},Image:&ImageSpec{Image:docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558,State:CONTAINER_RUNNING,CreatedAt:1722899282229468090,Labels:map[string]string{io.kubernetes.container.name: kubernetes-dashboard,io.kubernetes.pod.name: kubernetes-dashboard-779776cb65-xzpg6,io.kubernetes
.pod.namespace: kubernetes-dashboard,io.kubernetes.pod.uid: 8f477a33-cee8-48d7-8b33-9779a2323a9f,},Annotations:map[string]string{io.kubernetes.container.hash: 4c4275b5,io.kubernetes.container.ports: [{\"containerPort\":9090,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:43259b8b6a9155da35d76a07687b7f685640c3a4bf4926d7589ddcc58c56976f,PodSandboxId:d4b608b8e98bfaa672afadf8a0ef32fa44d72a8df54ea13687df4313623146ee,Metadata:&ContainerMetadata{Name:mount-munger,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_EXITED,CreatedAt:1722899273299186859,Labels:map[string]string{io.k
ubernetes.container.name: mount-munger,io.kubernetes.pod.name: busybox-mount,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e36767a8-7330-4158-842f-8949ab1a0d95,},Annotations:map[string]string{io.kubernetes.container.hash: 852802ca,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5034d1a7d4a45ec98dd9a048dae6f70e6cabc48f85fb3f53d4a1ed6ccbc36d01,PodSandboxId:975991fe739df26e0ba15070598e6e7bb2083565c3602ece609d2affe795457f,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1722899263765197194,Labels:map[string]string{io.kuber
netes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-57b4589c47-pb8sx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5ba59c63-9a77-49dc-b580-b4cf2136a81e,},Annotations:map[string]string{io.kubernetes.container.hash: 9dab4ebd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf3ddab3a460b10d461a9bef233ceb39dcec06ce6c6e1753793e8d2abccfe913,PodSandboxId:74c905298e75c6529f4bbd71ee0414873ff2e62c0236db8da11b3dce779f13f9,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1722899263659155897,Labels:map[string
]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-6d85cfcfd8-bvls2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 545b8cbe-5efb-4d9c-bcff-6fa15814afeb,},Annotations:map[string]string{io.kubernetes.container.hash: 94070ccf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a5aed1b8483a1d4d9372d447edb0e31f64b06ce34104c4cbcefc96712d9552a,PodSandboxId:69bbbfecd380ee8dca40f3ab21904d03affbf228aa035d38360503d54adf4c4e,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1722899259778287974,Labels:map[string
]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-64454c8b5c-2272b,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 97b31bc8-211e-4919-bf6e-5b6f10bdb0bc,},Annotations:map[string]string{io.kubernetes.container.hash: 3f52aa18,io.kubernetes.container.ports: [{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52cb4fb0ea6086aeacb1791efb6ed966ed056f7760ce10261bf8f971d0aab296,PodSandboxId:92d4fe21f5f2e11933892689fd11040e75569b393f9afcbd51773737cd82e3e2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State
:CONTAINER_RUNNING,CreatedAt:1722899188574471204,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-299463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b2566ba423864dc7d692b6e91b80dc,},Annotations:map[string]string{io.kubernetes.container.hash: 82d9e66c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aec549fbcbee0fb35b8eb4a7505b9d9a2d2481db92401e43e26bd2c0a7e824e9,PodSandboxId:208337df63397bf81ab4acd525dca0b62f6b381f811a7f069a0c833ae5d07b9f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1722899166736319686,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93a40ded-7e39-4c49-bb7d-ebf5b9a1376a,},Annotations:map[string]string{io.kubernetes.container.hash: b3c4de78,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14fa5fcc0776ac19d40e962d58b8fac9f3c423d61800193586b1c6c8082ff98b,PodSandboxId:92d4fe21f5f2e11933892689fd11040e75569b393f9afcbd51773737cd82e3e2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,Created
At:1722899166835562461,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-299463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3b2566ba423864dc7d692b6e91b80dc,},Annotations:map[string]string{io.kubernetes.container.hash: 82d9e66c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b7a6329469a8daa1f0052586315c5f052fb9ad4c9f229d30a4cd4f43fddb451,PodSandboxId:d7e7a284d8b823ce3065f99458edfd97739b1b59173a558f7ed0c44227163f94,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:17228991667280
52399,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrmjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab3ac050-cb0b-41b9-8bfb-da199c6555ac,},Annotations:map[string]string{io.kubernetes.container.hash: c32db193,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f3da8fa227bfc69449cda034d87fc4d634ec4cf63611679c9a25d7680bd18263,PodSandboxId:296620a86eef8adf142cca5283900fc1d86fb82a16683f4184a52970d6536324,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722899166740444231,Labels:map[string]string{io.ku
bernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9hm54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78d97c0-e51e-4505-a099-a6c9bb76e303,},Annotations:map[string]string{io.kubernetes.container.hash: 44313478,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49751aba72eec98d1612da22fd81450c446cc6aec3f32fbc741b9df28f56d447,PodSandboxId:3ad426cf1b4060363da01ccf2f1db27b9312348c3b053daf4723bee2878cf6b9,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722899161336840314,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-299463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e297d9f4cd386cf2596e000d17804fa,},Annotations:map[string]string{io.kubernetes.container.hash: b7bf53ca,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28788bc4ca8db465ec428c389ce83400231592ff8d617aecf0326d829b97361b,PodSandboxId:ffc157f67e20c0630718d248e362ce4f77bb4d62a64e9efb227f9895cc4568ed,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722899154857966830,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-299463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77d10334c905c6af7f9e4a41c2593db0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e04b5c8429e3c806ce7e14a70a8c90816ec5add9f2ba0c67f5cc57dc598fd7dc,PodSandboxId:c5245291d0b066a9037e050469b905c9d8e4407606a2153c3c25678bc03b878f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722899154845562890,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-299463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7726436fca7eafaad35475c9d8f8ee,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15bbcd193c2151deb1353a7e11a5831639c9909f652cf87ee7cf5d3ddb012707,PodSandboxId:296620a86eef8adf142cca5283900fc1d86fb82a16683f4184a52970d6536324,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722899153516377303,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9hm54,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78d97c0-e51e-4505-a099-a6c9bb76e303,},Annotations:map[string]string{io.kubernetes.container.hash: 44313478,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba33cf41bc99a98eeb2f4f77ecd320e237c376af51933411e5636a965e756797,PodSandboxId:d7e7a284d8b823ce3065f99458edfd97739b1b59173a558f7ed0c44227163f94,Metadata:&ContainerMet
adata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722899153113501458,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wrmjf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab3ac050-cb0b-41b9-8bfb-da199c6555ac,},Annotations:map[string]string{io.kubernetes.container.hash: c32db193,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:948c1e01ffb6bf1cc5792dc9787ca1dde3c7df35251b8eec31b4dcdc24332cf6,PodSandboxId:cd445aa024fb14d5195a5c8e9d09a5890a9ec7c8f57a69ab5e1b6989bad4faed,Metadata:&ContainerMetadata{Name:storage-provisioner,Att
empt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722899116671372749,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 93a40ded-7e39-4c49-bb7d-ebf5b9a1376a,},Annotations:map[string]string{io.kubernetes.container.hash: b3c4de78,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4851c0f61cd98c1555b57deca59545b1355fee71e3b42a82ef298d1e97f9acd,PodSandboxId:f6a51349696603fbc8b4e0eb3030eb2c424180116decf206f69abf08f5c21f2d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Im
age:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722899112940822939,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-299463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e297d9f4cd386cf2596e000d17804fa,},Annotations:map[string]string{io.kubernetes.container.hash: b7bf53ca,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84ed7a9ec1281dc1a43f723c40a7a6c52fc9a9f9f4f426954587e8882775fb0a,PodSandboxId:2d2e0bc9db8b7231417a2df11bdd19ea7eed0e01c6fdc9530427748ce5acac8f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09c
aacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722899112907458931,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-299463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77d10334c905c6af7f9e4a41c2593db0,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8112526f29a49da40c17b7003db2ebb0d40cdfb8f7b3ab3c2500879640a9e79b,PodSandboxId:abad861d44749a125b35c34f19346a7b124807ef92a9ca21543884e4498b2689,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f046
6dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722899112916553910,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-299463,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1d7726436fca7eafaad35475c9d8f8ee,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3b766e4f-89af-4582-b6d5-d2583c6d30a0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	449e48ecf1865       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   About a minute ago   Running             dashboard-metrics-scraper   0                   72c9576d5d0ca       dashboard-metrics-scraper-b5fc48f67-7cgnf
	67f39af54dc24       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         About a minute ago   Running             kubernetes-dashboard        0                   ea29386bcdbc4       kubernetes-dashboard-779776cb65-xzpg6
	43259b8b6a915       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              2 minutes ago        Exited              mount-munger                0                   d4b608b8e98bf       busybox-mount
	5034d1a7d4a45       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               2 minutes ago        Running             echoserver                  0                   975991fe739df       hello-node-connect-57b4589c47-pb8sx
	cf3ddab3a460b       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               2 minutes ago        Running             echoserver                  0                   74c905298e75c       hello-node-6d85cfcfd8-bvls2
	6a5aed1b8483a       docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb                  2 minutes ago        Running             mysql                       0                   69bbbfecd380e       mysql-64454c8b5c-2272b
	52cb4fb0ea608       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                 3 minutes ago        Running             kube-apiserver              2                   92d4fe21f5f2e       kube-apiserver-functional-299463
	14fa5fcc0776a       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                                 3 minutes ago        Exited              kube-apiserver              1                   92d4fe21f5f2e       kube-apiserver-functional-299463
	f3da8fa227bfc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                 3 minutes ago        Running             coredns                     3                   296620a86eef8       coredns-7db6d8ff4d-9hm54
	aec549fbcbee0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 3 minutes ago        Running             storage-provisioner         4                   208337df63397       storage-provisioner
	3b7a6329469a8       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                 3 minutes ago        Running             kube-proxy                  3                   d7e7a284d8b82       kube-proxy-wrmjf
	49751aba72eec       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                 3 minutes ago        Running             etcd                        3                   3ad426cf1b406       etcd-functional-299463
	28788bc4ca8db       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                 4 minutes ago        Running             kube-scheduler              3                   ffc157f67e20c       kube-scheduler-functional-299463
	e04b5c8429e3c       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                 4 minutes ago        Running             kube-controller-manager     3                   c5245291d0b06       kube-controller-manager-functional-299463
	15bbcd193c215       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                 4 minutes ago        Exited              coredns                     2                   296620a86eef8       coredns-7db6d8ff4d-9hm54
	ba33cf41bc99a       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                                 4 minutes ago        Exited              kube-proxy                  2                   d7e7a284d8b82       kube-proxy-wrmjf
	948c1e01ffb6b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 4 minutes ago        Exited              storage-provisioner         3                   cd445aa024fb1       storage-provisioner
	c4851c0f61cd9       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                                 4 minutes ago        Exited              etcd                        2                   f6a5134969660       etcd-functional-299463
	8112526f29a49       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                                 4 minutes ago        Exited              kube-controller-manager     2                   abad861d44749       kube-controller-manager-functional-299463
	84ed7a9ec1281       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                                 4 minutes ago        Exited              kube-scheduler              2                   2d2e0bc9db8b7       kube-scheduler-functional-299463
	
	
	==> coredns [15bbcd193c2151deb1353a7e11a5831639c9909f652cf87ee7cf5d3ddb012707] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:49736 - 18680 "HINFO IN 3518748112436738359.8122001145050231993. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013798475s
	
	
	==> coredns [f3da8fa227bfc69449cda034d87fc4d634ec4cf63611679c9a25d7680bd18263] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45851 - 26423 "HINFO IN 3089233811798295286.8924792917686477663. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025043783s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: watch of *v1.Service ended with: very short watch: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: watch of *v1.Namespace ended with: very short watch: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Unexpected watch close - watch lasted less than a second and no items received
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=548": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=548": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=548": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=548": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=548": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=548": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=548": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=548": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=548": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=548": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=548": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=548": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=548": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=548": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=548": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=548": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=548": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=548": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> describe nodes <==
	Name:               functional-299463
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-299463
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=functional-299463
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_05T23_03_57_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 23:03:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-299463
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 23:09:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 23:08:39 +0000   Mon, 05 Aug 2024 23:03:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 23:08:39 +0000   Mon, 05 Aug 2024 23:03:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 23:08:39 +0000   Mon, 05 Aug 2024 23:03:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 23:08:39 +0000   Mon, 05 Aug 2024 23:06:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.190
	  Hostname:    functional-299463
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 c7212e5b5898410aa86e15ab5cf0a69a
	  System UUID:                c7212e5b-5898-410a-a86e-15ab5cf0a69a
	  Boot ID:                    e20aeb93-e40b-4770-8d61-60dc99268449
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-6d85cfcfd8-bvls2                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m9s
	  default                     hello-node-connect-57b4589c47-pb8sx          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	  default                     mysql-64454c8b5c-2272b                       600m (30%)    700m (35%)  512Mi (13%)      700Mi (18%)    2m58s
	  kube-system                 coredns-7db6d8ff4d-9hm54                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m48s
	  kube-system                 etcd-functional-299463                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         6m1s
	  kube-system                 kube-apiserver-functional-299463             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 kube-controller-manager-functional-299463    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 kube-proxy-wrmjf                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m48s
	  kube-system                 kube-scheduler-functional-299463             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m46s
	  kubernetes-dashboard        dashboard-metrics-scraper-b5fc48f67-7cgnf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  kubernetes-dashboard        kubernetes-dashboard-779776cb65-xzpg6        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (67%)  700m (35%)
	  memory             682Mi (17%)  870Mi (22%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m45s                  kube-proxy       
	  Normal  Starting                 3m51s                  kube-proxy       
	  Normal  Starting                 4m42s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  6m7s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m7s (x8 over 6m7s)    kubelet          Node functional-299463 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m7s (x8 over 6m7s)    kubelet          Node functional-299463 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m7s (x7 over 6m7s)    kubelet          Node functional-299463 status is now: NodeHasSufficientPID
	  Normal  Starting                 6m7s                   kubelet          Starting kubelet.
	  Normal  Starting                 6m1s                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m1s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m1s                   kubelet          Node functional-299463 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m1s                   kubelet          Node functional-299463 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m1s                   kubelet          Node functional-299463 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m                     kubelet          Node functional-299463 status is now: NodeReady
	  Normal  RegisteredNode           5m49s                  node-controller  Node functional-299463 event: Registered Node functional-299463 in Controller
	  Normal  NodeHasSufficientMemory  4m46s (x8 over 4m46s)  kubelet          Node functional-299463 status is now: NodeHasSufficientMemory
	  Normal  Starting                 4m46s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    4m46s (x8 over 4m46s)  kubelet          Node functional-299463 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m46s (x7 over 4m46s)  kubelet          Node functional-299463 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m30s                  node-controller  Node functional-299463 event: Registered Node functional-299463 in Controller
	  Normal  NodeHasSufficientMemory  3m53s                  kubelet          Node functional-299463 status is now: NodeHasSufficientMemory
	  Normal  Starting                 3m53s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    3m53s                  kubelet          Node functional-299463 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m53s                  kubelet          Node functional-299463 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             3m53s                  kubelet          Node functional-299463 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  3m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m22s                  kubelet          Node functional-299463 status is now: NodeReady
	  Normal  RegisteredNode           3m21s                  node-controller  Node functional-299463 event: Registered Node functional-299463 in Controller
	
	
	==> dmesg <==
	[  +0.157052] systemd-fstab-generator[2564]: Ignoring "noauto" option for root device
	[  +0.140209] systemd-fstab-generator[2576]: Ignoring "noauto" option for root device
	[Aug 5 23:05] systemd-fstab-generator[2604]: Ignoring "noauto" option for root device
	[  +6.989784] systemd-fstab-generator[2732]: Ignoring "noauto" option for root device
	[  +0.081691] kauditd_printk_skb: 100 callbacks suppressed
	[  +4.123358] systemd-fstab-generator[3539]: Ignoring "noauto" option for root device
	[  +0.894820] kauditd_printk_skb: 133 callbacks suppressed
	[ +15.622813] kauditd_printk_skb: 12 callbacks suppressed
	[  +3.275944] systemd-fstab-generator[3942]: Ignoring "noauto" option for root device
	[ +18.404712] systemd-fstab-generator[4745]: Ignoring "noauto" option for root device
	[  +0.075893] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.060436] systemd-fstab-generator[4757]: Ignoring "noauto" option for root device
	[  +0.168845] systemd-fstab-generator[4771]: Ignoring "noauto" option for root device
	[  +0.122477] systemd-fstab-generator[4783]: Ignoring "noauto" option for root device
	[  +0.293177] systemd-fstab-generator[4811]: Ignoring "noauto" option for root device
	[  +1.334909] systemd-fstab-generator[4936]: Ignoring "noauto" option for root device
	[  +3.188245] kauditd_printk_skb: 197 callbacks suppressed
	[Aug 5 23:06] systemd-fstab-generator[5697]: Ignoring "noauto" option for root device
	[  +1.698433] kauditd_printk_skb: 33 callbacks suppressed
	[  +0.578798] systemd-fstab-generator[6114]: Ignoring "noauto" option for root device
	[ +21.180740] kauditd_printk_skb: 41 callbacks suppressed
	[Aug 5 23:07] kauditd_printk_skb: 26 callbacks suppressed
	[ +10.304497] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.219073] kauditd_printk_skb: 11 callbacks suppressed
	[Aug 5 23:08] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [49751aba72eec98d1612da22fd81450c446cc6aec3f32fbc741b9df28f56d447] <==
	{"level":"info","ts":"2024-08-05T23:07:37.990414Z","caller":"traceutil/trace.go:171","msg":"trace[496817736] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:4; response_revision:754; }","duration":"342.237413ms","start":"2024-08-05T23:07:37.648171Z","end":"2024-08-05T23:07:37.990408Z","steps":["trace[496817736] 'range keys from in-memory index tree'  (duration: 342.087875ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-05T23:07:37.990446Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-05T23:07:37.648136Z","time spent":"342.297005ms","remote":"127.0.0.1:54848","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":4,"response size":10855,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"warn","ts":"2024-08-05T23:07:41.402971Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.853814ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-05T23:07:41.40302Z","caller":"traceutil/trace.go:171","msg":"trace[109710534] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:0; response_revision:763; }","duration":"114.940075ms","start":"2024-08-05T23:07:41.28807Z","end":"2024-08-05T23:07:41.40301Z","steps":["trace[109710534] 'count revisions from in-memory index tree'  (duration: 114.800173ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-05T23:07:41.403273Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"511.526098ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-05T23:07:41.403299Z","caller":"traceutil/trace.go:171","msg":"trace[1886500553] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:763; }","duration":"511.567447ms","start":"2024-08-05T23:07:40.891718Z","end":"2024-08-05T23:07:41.403285Z","steps":["trace[1886500553] 'range keys from in-memory index tree'  (duration: 511.411838ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-05T23:07:41.403317Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-05T23:07:40.891698Z","time spent":"511.611896ms","remote":"127.0.0.1:54706","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-08-05T23:07:41.403448Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"436.158632ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:4 size:10908"}
	{"level":"info","ts":"2024-08-05T23:07:41.403462Z","caller":"traceutil/trace.go:171","msg":"trace[1213326419] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:4; response_revision:763; }","duration":"436.197151ms","start":"2024-08-05T23:07:40.96726Z","end":"2024-08-05T23:07:41.403457Z","steps":["trace[1213326419] 'range keys from in-memory index tree'  (duration: 436.056023ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-05T23:07:41.403481Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-05T23:07:40.967204Z","time spent":"436.273944ms","remote":"127.0.0.1:54848","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":4,"response size":10931,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"warn","ts":"2024-08-05T23:07:41.403816Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"303.853803ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:4 size:10908"}
	{"level":"info","ts":"2024-08-05T23:07:41.403842Z","caller":"traceutil/trace.go:171","msg":"trace[162457105] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:4; response_revision:763; }","duration":"303.899485ms","start":"2024-08-05T23:07:41.099932Z","end":"2024-08-05T23:07:41.403832Z","steps":["trace[162457105] 'range keys from in-memory index tree'  (duration: 303.763035ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-05T23:07:41.403858Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-05T23:07:41.099914Z","time spent":"303.940338ms","remote":"127.0.0.1:54848","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":4,"response size":10931,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"warn","ts":"2024-08-05T23:07:41.404023Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"205.911995ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:4 size:10908"}
	{"level":"info","ts":"2024-08-05T23:07:41.404044Z","caller":"traceutil/trace.go:171","msg":"trace[710098782] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:4; response_revision:763; }","duration":"205.952997ms","start":"2024-08-05T23:07:41.198084Z","end":"2024-08-05T23:07:41.404037Z","steps":["trace[710098782] 'range keys from in-memory index tree'  (duration: 205.832072ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T23:07:47.549816Z","caller":"traceutil/trace.go:171","msg":"trace[1914744036] linearizableReadLoop","detail":"{readStateIndex:875; appliedIndex:874; }","duration":"351.72255ms","start":"2024-08-05T23:07:47.198077Z","end":"2024-08-05T23:07:47.5498Z","steps":["trace[1914744036] 'read index received'  (duration: 349.16104ms)","trace[1914744036] 'applied index is now lower than readState.Index'  (duration: 2.560533ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-05T23:07:47.549987Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"351.894524ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:4 size:11024"}
	{"level":"info","ts":"2024-08-05T23:07:47.550009Z","caller":"traceutil/trace.go:171","msg":"trace[1349515650] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:4; response_revision:785; }","duration":"352.007847ms","start":"2024-08-05T23:07:47.197994Z","end":"2024-08-05T23:07:47.550001Z","steps":["trace[1349515650] 'agreement among raft nodes before linearized reading'  (duration: 351.887558ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-05T23:07:47.550034Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-05T23:07:47.197977Z","time spent":"352.052911ms","remote":"127.0.0.1:54848","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":4,"response size":11047,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"warn","ts":"2024-08-05T23:08:02.041096Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-05T23:08:01.669935Z","time spent":"371.157981ms","remote":"127.0.0.1:54728","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2024-08-05T23:08:02.04163Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"236.014105ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2024-08-05T23:08:02.041719Z","caller":"traceutil/trace.go:171","msg":"trace[863727950] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:869; }","duration":"236.132469ms","start":"2024-08-05T23:08:01.805577Z","end":"2024-08-05T23:08:02.04171Z","steps":["trace[863727950] 'agreement among raft nodes before linearized reading'  (duration: 236.009837ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T23:08:02.041489Z","caller":"traceutil/trace.go:171","msg":"trace[2090498805] linearizableReadLoop","detail":"{readStateIndex:962; appliedIndex:962; }","duration":"235.809052ms","start":"2024-08-05T23:08:01.805602Z","end":"2024-08-05T23:08:02.041411Z","steps":["trace[2090498805] 'read index received'  (duration: 235.805054ms)","trace[2090498805] 'applied index is now lower than readState.Index'  (duration: 3.051µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-05T23:08:02.048707Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"171.421276ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-05T23:08:02.048785Z","caller":"traceutil/trace.go:171","msg":"trace[21918549] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:870; }","duration":"171.525319ms","start":"2024-08-05T23:08:01.877249Z","end":"2024-08-05T23:08:02.048774Z","steps":["trace[21918549] 'agreement among raft nodes before linearized reading'  (duration: 171.287079ms)"],"step_count":1}
	
	
	==> etcd [c4851c0f61cd98c1555b57deca59545b1355fee71e3b42a82ef298d1e97f9acd] <==
	{"level":"info","ts":"2024-08-05T23:05:13.437783Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"dc6e2f4e9dcc679a","initial-advertise-peer-urls":["https://192.168.39.190:2380"],"listen-peer-urls":["https://192.168.39.190:2380"],"advertise-client-urls":["https://192.168.39.190:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.190:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-05T23:05:14.560788Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dc6e2f4e9dcc679a is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-05T23:05:14.567747Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dc6e2f4e9dcc679a became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-05T23:05:14.567814Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dc6e2f4e9dcc679a received MsgPreVoteResp from dc6e2f4e9dcc679a at term 2"}
	{"level":"info","ts":"2024-08-05T23:05:14.567828Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dc6e2f4e9dcc679a became candidate at term 3"}
	{"level":"info","ts":"2024-08-05T23:05:14.567834Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dc6e2f4e9dcc679a received MsgVoteResp from dc6e2f4e9dcc679a at term 3"}
	{"level":"info","ts":"2024-08-05T23:05:14.567845Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dc6e2f4e9dcc679a became leader at term 3"}
	{"level":"info","ts":"2024-08-05T23:05:14.567853Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dc6e2f4e9dcc679a elected leader dc6e2f4e9dcc679a at term 3"}
	{"level":"info","ts":"2024-08-05T23:05:14.573356Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"dc6e2f4e9dcc679a","local-member-attributes":"{Name:functional-299463 ClientURLs:[https://192.168.39.190:2379]}","request-path":"/0/members/dc6e2f4e9dcc679a/attributes","cluster-id":"22dc5a3adec033ed","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-05T23:05:14.573394Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T23:05:14.573967Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T23:05:14.575609Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-05T23:05:14.577027Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.190:2379"}
	{"level":"info","ts":"2024-08-05T23:05:14.577862Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-05T23:05:14.577894Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-05T23:05:44.186399Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-05T23:05:44.186469Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-299463","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.190:2380"],"advertise-client-urls":["https://192.168.39.190:2379"]}
	{"level":"warn","ts":"2024-08-05T23:05:44.186537Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-05T23:05:44.186621Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-05T23:05:44.277398Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.190:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-05T23:05:44.277507Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.190:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-05T23:05:44.277564Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"dc6e2f4e9dcc679a","current-leader-member-id":"dc6e2f4e9dcc679a"}
	{"level":"info","ts":"2024-08-05T23:05:44.281335Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.190:2380"}
	{"level":"info","ts":"2024-08-05T23:05:44.281539Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.190:2380"}
	{"level":"info","ts":"2024-08-05T23:05:44.281586Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-299463","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.190:2380"],"advertise-client-urls":["https://192.168.39.190:2379"]}
	
	
	==> kernel <==
	 23:09:58 up 6 min,  0 users,  load average: 0.32, 0.59, 0.29
	Linux functional-299463 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [14fa5fcc0776ac19d40e962d58b8fac9f3c423d61800193586b1c6c8082ff98b] <==
	I0805 23:06:07.173047       1 options.go:221] external host was not specified, using 192.168.39.190
	I0805 23:06:07.174099       1 server.go:148] Version: v1.30.3
	I0805 23:06:07.174195       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0805 23:06:07.174630       1 run.go:74] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	
	
	==> kube-apiserver [52cb4fb0ea6086aeacb1791efb6ed966ed056f7760ce10261bf8f971d0aab296] <==
	I0805 23:06:30.525619       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0805 23:06:30.525695       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0805 23:06:30.527892       1 shared_informer.go:320] Caches are synced for configmaps
	I0805 23:06:30.529109       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0805 23:06:30.549564       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0805 23:06:30.550523       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0805 23:06:30.551897       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0805 23:06:30.560777       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0805 23:06:31.410432       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0805 23:06:31.669942       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.190]
	I0805 23:06:31.671128       1 controller.go:615] quota admission added evaluator for: endpoints
	I0805 23:06:31.676058       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0805 23:06:46.376114       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.110.191.185"}
	I0805 23:06:49.828362       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0805 23:06:49.836626       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0805 23:06:49.962971       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.104.139.179"}
	I0805 23:06:51.193486       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.97.224.22"}
	I0805 23:07:00.602010       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.105.203.80"}
	E0805 23:07:46.807870       1 conn.go:339] Error on socket receive: read tcp 192.168.39.190:8441->192.168.39.1:48916: use of closed network connection
	I0805 23:07:53.211005       1 controller.go:615] quota admission added evaluator for: namespaces
	I0805 23:07:53.236882       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0805 23:07:53.396502       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0805 23:07:53.425250       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0805 23:07:53.729879       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.224.180"}
	I0805 23:07:53.764485       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.171.250"}
	
	
	==> kube-controller-manager [8112526f29a49da40c17b7003db2ebb0d40cdfb8f7b3ab3c2500879640a9e79b] <==
	I0805 23:05:28.421265       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0805 23:05:28.422558       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0805 23:05:28.422683       1 shared_informer.go:320] Caches are synced for ephemeral
	I0805 23:05:28.427318       1 shared_informer.go:320] Caches are synced for disruption
	I0805 23:05:28.429215       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0805 23:05:28.431722       1 shared_informer.go:320] Caches are synced for endpoint
	I0805 23:05:28.434622       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0805 23:05:28.436701       1 shared_informer.go:320] Caches are synced for node
	I0805 23:05:28.436912       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0805 23:05:28.436963       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0805 23:05:28.436986       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0805 23:05:28.437014       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0805 23:05:28.440220       1 shared_informer.go:320] Caches are synced for PV protection
	I0805 23:05:28.442820       1 shared_informer.go:320] Caches are synced for GC
	I0805 23:05:28.470588       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0805 23:05:28.473003       1 shared_informer.go:320] Caches are synced for persistent volume
	I0805 23:05:28.516869       1 shared_informer.go:320] Caches are synced for attach detach
	I0805 23:05:28.600566       1 shared_informer.go:320] Caches are synced for resource quota
	I0805 23:05:28.632600       1 shared_informer.go:320] Caches are synced for resource quota
	I0805 23:05:28.653609       1 shared_informer.go:320] Caches are synced for job
	I0805 23:05:28.664411       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0805 23:05:28.675880       1 shared_informer.go:320] Caches are synced for cronjob
	I0805 23:05:29.049721       1 shared_informer.go:320] Caches are synced for garbage collector
	I0805 23:05:29.049766       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0805 23:05:29.057128       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [e04b5c8429e3c806ce7e14a70a8c90816ec5add9f2ba0c67f5cc57dc598fd7dc] <==
	E0805 23:07:53.394134       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0805 23:07:53.407834       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="13.669561ms"
	E0805 23:07:53.407911       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0805 23:07:53.425044       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="68.959943ms"
	E0805 23:07:53.425141       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0805 23:07:53.454423       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="46.468065ms"
	E0805 23:07:53.454481       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0805 23:07:53.454853       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="29.687047ms"
	E0805 23:07:53.454868       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0805 23:07:53.470138       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="15.627885ms"
	E0805 23:07:53.470203       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" failed with pods "dashboard-metrics-scraper-b5fc48f67-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0805 23:07:53.478517       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="19.079178ms"
	E0805 23:07:53.478564       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-779776cb65" failed with pods "kubernetes-dashboard-779776cb65-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0805 23:07:53.510142       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="31.546891ms"
	I0805 23:07:53.538185       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="27.972962ms"
	I0805 23:07:53.538281       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="48.312µs"
	I0805 23:07:53.554563       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="58.313018ms"
	I0805 23:07:53.603085       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="48.471301ms"
	I0805 23:07:53.603178       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="37.082µs"
	I0805 23:07:53.604617       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="28.296µs"
	I0805 23:07:53.654231       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="39.093µs"
	I0805 23:08:02.715194       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="11.434222ms"
	I0805 23:08:02.715572       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-779776cb65" duration="157.051µs"
	I0805 23:08:06.748407       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="14.623887ms"
	I0805 23:08:06.748850       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67" duration="54.155µs"
	
	
	==> kube-proxy [3b7a6329469a8daa1f0052586315c5f052fb9ad4c9f229d30a4cd4f43fddb451] <==
	I0805 23:06:07.235839       1 config.go:101] "Starting endpoint slice config controller"
	I0805 23:06:07.235907       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 23:06:07.236366       1 config.go:319] "Starting node config controller"
	I0805 23:06:07.238527       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 23:06:07.336685       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0805 23:06:07.336706       1 shared_informer.go:320] Caches are synced for service config
	I0805 23:06:07.338902       1 shared_informer.go:320] Caches are synced for node config
	W0805 23:06:07.645850       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
	W0805 23:06:07.646557       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
	W0805 23:06:08.943113       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-299463&resourceVersion=557": dial tcp 192.168.39.190:8441: connect: connection refused
	E0805 23:06:08.943198       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-299463&resourceVersion=557": dial tcp 192.168.39.190:8441: connect: connection refused
	W0805 23:06:08.948844       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=548": dial tcp 192.168.39.190:8441: connect: connection refused
	E0805 23:06:08.948905       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=548": dial tcp 192.168.39.190:8441: connect: connection refused
	W0805 23:06:11.117589       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=548": dial tcp 192.168.39.190:8441: connect: connection refused
	E0805 23:06:11.117695       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=548": dial tcp 192.168.39.190:8441: connect: connection refused
	W0805 23:06:11.522384       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-299463&resourceVersion=557": dial tcp 192.168.39.190:8441: connect: connection refused
	E0805 23:06:11.522529       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-299463&resourceVersion=557": dial tcp 192.168.39.190:8441: connect: connection refused
	W0805 23:06:16.393339       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=548": dial tcp 192.168.39.190:8441: connect: connection refused
	E0805 23:06:16.393455       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=548": dial tcp 192.168.39.190:8441: connect: connection refused
	W0805 23:06:16.400029       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-299463&resourceVersion=557": dial tcp 192.168.39.190:8441: connect: connection refused
	E0805 23:06:16.400127       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-299463&resourceVersion=557": dial tcp 192.168.39.190:8441: connect: connection refused
	W0805 23:06:26.042084       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=548": dial tcp 192.168.39.190:8441: connect: connection refused
	E0805 23:06:26.042303       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=548": dial tcp 192.168.39.190:8441: connect: connection refused
	W0805 23:06:28.740902       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-299463&resourceVersion=557": dial tcp 192.168.39.190:8441: connect: connection refused
	E0805 23:06:28.741049       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)functional-299463&resourceVersion=557": dial tcp 192.168.39.190:8441: connect: connection refused
	
	
	==> kube-proxy [ba33cf41bc99a98eeb2f4f77ecd320e237c376af51933411e5636a965e756797] <==
	I0805 23:05:53.378187       1 server_linux.go:69] "Using iptables proxy"
	E0805 23:05:53.382399       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-299463\": dial tcp 192.168.39.190:8441: connect: connection refused"
	E0805 23:05:54.552198       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-299463\": dial tcp 192.168.39.190:8441: connect: connection refused"
	
	
	==> kube-scheduler [28788bc4ca8db465ec428c389ce83400231592ff8d617aecf0326d829b97361b] <==
	I0805 23:05:55.902496       1 serving.go:380] Generated self-signed cert in-memory
	W0805 23:06:04.505817       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0805 23:06:04.505920       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0805 23:06:04.505947       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0805 23:06:04.505954       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0805 23:06:04.568695       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0805 23:06:04.570169       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 23:06:04.575063       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0805 23:06:04.575421       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0805 23:06:04.575536       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0805 23:06:04.575613       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0805 23:06:04.676529       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0805 23:06:30.435979       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)
	
	
	==> kube-scheduler [84ed7a9ec1281dc1a43f723c40a7a6c52fc9a9f9f4f426954587e8882775fb0a] <==
	E0805 23:05:15.960499       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0805 23:05:15.960389       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0805 23:05:15.960591       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0805 23:05:15.963180       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0805 23:05:15.963288       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0805 23:05:15.963717       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0805 23:05:15.963823       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0805 23:05:15.964028       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0805 23:05:15.964057       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0805 23:05:15.964234       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0805 23:05:15.964263       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0805 23:05:15.964386       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0805 23:05:15.964470       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0805 23:05:15.964580       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0805 23:05:15.965238       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0805 23:05:15.964945       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0805 23:05:15.965276       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0805 23:05:15.965053       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0805 23:05:15.965366       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0805 23:05:15.965155       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0805 23:05:15.965379       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0805 23:05:15.964749       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0805 23:05:15.967855       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0805 23:05:16.839504       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0805 23:05:44.190711       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 05 23:07:54 functional-299463 kubelet[5704]: E0805 23:07:54.715330    5704 projected.go:294] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Aug 05 23:07:54 functional-299463 kubelet[5704]: E0805 23:07:54.715385    5704 projected.go:200] Error preparing data for projected volume kube-api-access-9qtnv for pod kubernetes-dashboard/kubernetes-dashboard-779776cb65-xzpg6: failed to sync configmap cache: timed out waiting for the condition
	Aug 05 23:07:54 functional-299463 kubelet[5704]: E0805 23:07:54.715474    5704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8f477a33-cee8-48d7-8b33-9779a2323a9f-kube-api-access-9qtnv podName:8f477a33-cee8-48d7-8b33-9779a2323a9f nodeName:}" failed. No retries permitted until 2024-08-05 23:07:55.215450229 +0000 UTC m=+109.926645087 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9qtnv" (UniqueName: "kubernetes.io/projected/8f477a33-cee8-48d7-8b33-9779a2323a9f-kube-api-access-9qtnv") pod "kubernetes-dashboard-779776cb65-xzpg6" (UID: "8f477a33-cee8-48d7-8b33-9779a2323a9f") : failed to sync configmap cache: timed out waiting for the condition
	Aug 05 23:07:54 functional-299463 kubelet[5704]: E0805 23:07:54.815414    5704 projected.go:294] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Aug 05 23:07:54 functional-299463 kubelet[5704]: E0805 23:07:54.815463    5704 projected.go:200] Error preparing data for projected volume kube-api-access-q6dg7 for pod kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-7cgnf: failed to sync configmap cache: timed out waiting for the condition
	Aug 05 23:07:54 functional-299463 kubelet[5704]: E0805 23:07:54.815549    5704 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0decaeed-b26f-4dff-bdb3-f5bf8ff0f09f-kube-api-access-q6dg7 podName:0decaeed-b26f-4dff-bdb3-f5bf8ff0f09f nodeName:}" failed. No retries permitted until 2024-08-05 23:07:55.315518177 +0000 UTC m=+110.026713038 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-q6dg7" (UniqueName: "kubernetes.io/projected/0decaeed-b26f-4dff-bdb3-f5bf8ff0f09f-kube-api-access-q6dg7") pod "dashboard-metrics-scraper-b5fc48f67-7cgnf" (UID: "0decaeed-b26f-4dff-bdb3-f5bf8ff0f09f") : failed to sync configmap cache: timed out waiting for the condition
	Aug 05 23:07:55 functional-299463 kubelet[5704]: I0805 23:07:55.806312    5704 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/e36767a8-7330-4158-842f-8949ab1a0d95-test-volume\") pod \"e36767a8-7330-4158-842f-8949ab1a0d95\" (UID: \"e36767a8-7330-4158-842f-8949ab1a0d95\") "
	Aug 05 23:07:55 functional-299463 kubelet[5704]: I0805 23:07:55.806410    5704 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzv98\" (UniqueName: \"kubernetes.io/projected/e36767a8-7330-4158-842f-8949ab1a0d95-kube-api-access-gzv98\") pod \"e36767a8-7330-4158-842f-8949ab1a0d95\" (UID: \"e36767a8-7330-4158-842f-8949ab1a0d95\") "
	Aug 05 23:07:55 functional-299463 kubelet[5704]: I0805 23:07:55.806705    5704 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e36767a8-7330-4158-842f-8949ab1a0d95-test-volume" (OuterVolumeSpecName: "test-volume") pod "e36767a8-7330-4158-842f-8949ab1a0d95" (UID: "e36767a8-7330-4158-842f-8949ab1a0d95"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 05 23:07:55 functional-299463 kubelet[5704]: I0805 23:07:55.810232    5704 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e36767a8-7330-4158-842f-8949ab1a0d95-kube-api-access-gzv98" (OuterVolumeSpecName: "kube-api-access-gzv98") pod "e36767a8-7330-4158-842f-8949ab1a0d95" (UID: "e36767a8-7330-4158-842f-8949ab1a0d95"). InnerVolumeSpecName "kube-api-access-gzv98". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 05 23:07:55 functional-299463 kubelet[5704]: I0805 23:07:55.906712    5704 reconciler_common.go:289] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/e36767a8-7330-4158-842f-8949ab1a0d95-test-volume\") on node \"functional-299463\" DevicePath \"\""
	Aug 05 23:07:55 functional-299463 kubelet[5704]: I0805 23:07:55.906754    5704 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-gzv98\" (UniqueName: \"kubernetes.io/projected/e36767a8-7330-4158-842f-8949ab1a0d95-kube-api-access-gzv98\") on node \"functional-299463\" DevicePath \"\""
	Aug 05 23:07:56 functional-299463 kubelet[5704]: I0805 23:07:56.644291    5704 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4b608b8e98bfaa672afadf8a0ef32fa44d72a8df54ea13687df4313623146ee"
	Aug 05 23:08:05 functional-299463 kubelet[5704]: E0805 23:08:05.636601    5704 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:08:05 functional-299463 kubelet[5704]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:08:05 functional-299463 kubelet[5704]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:08:05 functional-299463 kubelet[5704]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:08:05 functional-299463 kubelet[5704]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:08:06 functional-299463 kubelet[5704]: I0805 23:08:06.732363    5704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-779776cb65-xzpg6" podStartSLOduration=7.051625283 podStartE2EDuration="13.732348491s" podCreationTimestamp="2024-08-05 23:07:53 +0000 UTC" firstStartedPulling="2024-08-05 23:07:55.534227553 +0000 UTC m=+110.245422424" lastFinishedPulling="2024-08-05 23:08:02.214950766 +0000 UTC m=+116.926145632" observedRunningTime="2024-08-05 23:08:02.703231284 +0000 UTC m=+117.414426163" watchObservedRunningTime="2024-08-05 23:08:06.732348491 +0000 UTC m=+121.443543370"
	Aug 05 23:08:06 functional-299463 kubelet[5704]: I0805 23:08:06.732464    5704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-7cgnf" podStartSLOduration=3.787460102 podStartE2EDuration="13.732459744s" podCreationTimestamp="2024-08-05 23:07:53 +0000 UTC" firstStartedPulling="2024-08-05 23:07:55.973253829 +0000 UTC m=+110.684448688" lastFinishedPulling="2024-08-05 23:08:05.918253468 +0000 UTC m=+120.629448330" observedRunningTime="2024-08-05 23:08:06.731475816 +0000 UTC m=+121.442670696" watchObservedRunningTime="2024-08-05 23:08:06.732459744 +0000 UTC m=+121.443654631"
	Aug 05 23:09:05 functional-299463 kubelet[5704]: E0805 23:09:05.634590    5704 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:09:05 functional-299463 kubelet[5704]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:09:05 functional-299463 kubelet[5704]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:09:05 functional-299463 kubelet[5704]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:09:05 functional-299463 kubelet[5704]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> kubernetes-dashboard [67f39af54dc24ccf4589e247873d596e93a6cef19dc58ea160fe44607a495560] <==
	2024/08/05 23:08:02 Starting overwatch
	2024/08/05 23:08:02 Using namespace: kubernetes-dashboard
	2024/08/05 23:08:02 Using in-cluster config to connect to apiserver
	2024/08/05 23:08:02 Using secret token for csrf signing
	2024/08/05 23:08:02 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/08/05 23:08:02 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/08/05 23:08:02 Successful initial request to the apiserver, version: v1.30.3
	2024/08/05 23:08:02 Generating JWE encryption key
	2024/08/05 23:08:02 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/08/05 23:08:02 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/08/05 23:08:02 Initializing JWE encryption key from synchronized object
	2024/08/05 23:08:02 Creating in-cluster Sidecar client
	2024/08/05 23:08:02 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/05 23:08:02 Serving insecurely on HTTP port: 9090
	2024/08/05 23:08:32 Successful request to sidecar
	
	
	==> storage-provisioner [948c1e01ffb6bf1cc5792dc9787ca1dde3c7df35251b8eec31b4dcdc24332cf6] <==
	I0805 23:05:16.750220       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0805 23:05:16.759540       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0805 23:05:16.759929       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0805 23:05:34.163867       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0805 23:05:34.164044       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-299463_6fd0aa00-4f8f-4dcf-9f12-c8b41b936b26!
	I0805 23:05:34.164039       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2f099b56-0930-44b8-88f2-9f1a9b90cf5a", APIVersion:"v1", ResourceVersion:"540", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-299463_6fd0aa00-4f8f-4dcf-9f12-c8b41b936b26 became leader
	I0805 23:05:34.265168       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-299463_6fd0aa00-4f8f-4dcf-9f12-c8b41b936b26!
	
	
	==> storage-provisioner [aec549fbcbee0fb35b8eb4a7505b9d9a2d2481db92401e43e26bd2c0a7e824e9] <==
	E0805 23:06:10.633844       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0805 23:06:14.892449       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0805 23:06:18.488329       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0805 23:06:21.540464       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0805 23:06:24.560788       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0805 23:06:28.210243       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0805 23:06:30.370148       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0805 23:06:32.747237       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0805 23:06:34.980891       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0805 23:06:37.706306       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0805 23:06:40.943323       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	I0805 23:06:44.911262       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0805 23:06:44.911423       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-299463_2c76c5f1-02ab-467f-893b-94121f0e31eb!
	I0805 23:06:44.912731       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2f099b56-0930-44b8-88f2-9f1a9b90cf5a", APIVersion:"v1", ResourceVersion:"626", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-299463_2c76c5f1-02ab-467f-893b-94121f0e31eb became leader
	I0805 23:06:45.012188       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-299463_2c76c5f1-02ab-467f-893b-94121f0e31eb!
	I0805 23:06:55.778006       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0805 23:06:55.778069       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    30fa7049-98ef-422a-a644-42255c3e04dc 393 0 2024-08-05 23:04:11 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-08-05 23:04:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-111e8d8d-7a8e-4ade-a6fc-02c8d79bb45c &PersistentVolumeClaim{ObjectMeta:{myclaim  default  111e8d8d-7a8e-4ade-a6fc-02c8d79bb45c 681 0 2024-08-05 23:06:55 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-08-05 23:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-08-05 23:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0805 23:06:55.778495       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-111e8d8d-7a8e-4ade-a6fc-02c8d79bb45c" provisioned
	I0805 23:06:55.778538       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0805 23:06:55.778547       1 volume_store.go:212] Trying to save persistentvolume "pvc-111e8d8d-7a8e-4ade-a6fc-02c8d79bb45c"
	I0805 23:06:55.779546       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"111e8d8d-7a8e-4ade-a6fc-02c8d79bb45c", APIVersion:"v1", ResourceVersion:"681", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0805 23:06:55.807524       1 volume_store.go:219] persistentvolume "pvc-111e8d8d-7a8e-4ade-a6fc-02c8d79bb45c" saved
	I0805 23:06:55.808063       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"111e8d8d-7a8e-4ade-a6fc-02c8d79bb45c", APIVersion:"v1", ResourceVersion:"681", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-111e8d8d-7a8e-4ade-a6fc-02c8d79bb45c
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-299463 -n functional-299463
helpers_test.go:261: (dbg) Run:  kubectl --context functional-299463 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-299463 describe pod busybox-mount sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-299463 describe pod busybox-mount sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-299463/192.168.39.190
	Start Time:       Mon, 05 Aug 2024 23:07:49 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://43259b8b6a9155da35d76a07687b7f685640c3a4bf4926d7589ddcc58c56976f
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 05 Aug 2024 23:07:53 +0000
	      Finished:     Mon, 05 Aug 2024 23:07:53 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gzv98 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-gzv98:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2m9s  default-scheduler  Successfully assigned default/busybox-mount to functional-299463
	  Normal  Pulling    2m9s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     2m6s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.984s (2.984s including waiting). Image size: 4631262 bytes.
	  Normal  Created    2m6s  kubelet            Created container mount-munger
	  Normal  Started    2m6s  kubelet            Started container mount-munger
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  myfrontend:
	    Image:        docker.io/nginx
	    Port:         <none>
	    Host Port:    <none>
	    Environment:  <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-65n2z (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-65n2z:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age    From               Message
	  ----     ------            ----   ----               -------
	  Warning  FailedScheduling  2m32s  default-scheduler  0/1 nodes are available: persistentvolume "pvc-111e8d8d-7a8e-4ade-a6fc-02c8d79bb45c" not found. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  2m27s  default-scheduler  0/1 nodes are available: 1 node(s) unavailable due to one or more pvc(s) bound to non-existent pv(s). preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (189.10s)
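The FailedScheduling events above show the scheduler rejecting sp-pod because myclaim was bound to a PV (pvc-111e8d8d-7a8e-4ade-a6fc-02c8d79bb45c) the node could not see yet, even though the provisioner log reports the volume as saved. A minimal sketch of how the claim/volume binding could be inspected by hand while reproducing this failure, assuming the same functional-299463 kubeconfig context and default namespace used by the test (not part of the test itself):

	# check whether the claim is Bound and which PV it points at
	kubectl --context functional-299463 get pvc myclaim -o wide
	# list PVs; the pvc-... volume should appear here once the provisioner has saved it
	kubectl --context functional-299463 get pv
	# events on the claim show provisioning progress from storage-provisioner
	kubectl --context functional-299463 describe pvc myclaim
	# scheduler events on the pending pod mirror the FailedScheduling warnings above
	kubectl --context functional-299463 describe pod sp-pod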

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 node stop m02 -v=7 --alsologtostderr
E0805 23:16:49.981337   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/functional-299463/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-044175 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.468383519s)

                                                
                                                
-- stdout --
	* Stopping node "ha-044175-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 23:14:55.141942   33401 out.go:291] Setting OutFile to fd 1 ...
	I0805 23:14:55.142183   33401 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:14:55.142198   33401 out.go:304] Setting ErrFile to fd 2...
	I0805 23:14:55.142204   33401 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:14:55.142377   33401 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-9606/.minikube/bin
	I0805 23:14:55.142649   33401 mustload.go:65] Loading cluster: ha-044175
	I0805 23:14:55.143002   33401 config.go:182] Loaded profile config "ha-044175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:14:55.143024   33401 stop.go:39] StopHost: ha-044175-m02
	I0805 23:14:55.143463   33401 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:14:55.143520   33401 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:14:55.159442   33401 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40357
	I0805 23:14:55.159861   33401 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:14:55.160372   33401 main.go:141] libmachine: Using API Version  1
	I0805 23:14:55.160395   33401 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:14:55.160703   33401 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:14:55.163135   33401 out.go:177] * Stopping node "ha-044175-m02"  ...
	I0805 23:14:55.164616   33401 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0805 23:14:55.164649   33401 main.go:141] libmachine: (ha-044175-m02) Calling .DriverName
	I0805 23:14:55.164818   33401 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0805 23:14:55.164842   33401 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHHostname
	I0805 23:14:55.167522   33401 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:14:55.167926   33401 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:14:55.167949   33401 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:14:55.168104   33401 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHPort
	I0805 23:14:55.168291   33401 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHKeyPath
	I0805 23:14:55.168478   33401 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHUsername
	I0805 23:14:55.168658   33401 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m02/id_rsa Username:docker}
	I0805 23:14:55.254392   33401 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0805 23:14:55.308624   33401 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0805 23:14:55.363785   33401 main.go:141] libmachine: Stopping "ha-044175-m02"...
	I0805 23:14:55.363848   33401 main.go:141] libmachine: (ha-044175-m02) Calling .GetState
	I0805 23:14:55.365386   33401 main.go:141] libmachine: (ha-044175-m02) Calling .Stop
	I0805 23:14:55.369030   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 0/120
	I0805 23:14:56.370374   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 1/120
	I0805 23:14:57.371796   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 2/120
	I0805 23:14:58.374152   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 3/120
	I0805 23:14:59.375913   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 4/120
	I0805 23:15:00.377689   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 5/120
	I0805 23:15:01.379640   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 6/120
	I0805 23:15:02.381741   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 7/120
	I0805 23:15:03.383238   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 8/120
	I0805 23:15:04.384735   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 9/120
	I0805 23:15:05.387177   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 10/120
	I0805 23:15:06.388471   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 11/120
	I0805 23:15:07.390103   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 12/120
	I0805 23:15:08.391814   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 13/120
	I0805 23:15:09.393608   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 14/120
	I0805 23:15:10.395882   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 15/120
	I0805 23:15:11.397635   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 16/120
	I0805 23:15:12.399505   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 17/120
	I0805 23:15:13.401963   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 18/120
	I0805 23:15:14.403781   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 19/120
	I0805 23:15:15.406156   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 20/120
	I0805 23:15:16.408319   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 21/120
	I0805 23:15:17.410205   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 22/120
	I0805 23:15:18.411530   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 23/120
	I0805 23:15:19.413050   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 24/120
	I0805 23:15:20.415421   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 25/120
	I0805 23:15:21.417863   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 26/120
	I0805 23:15:22.419399   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 27/120
	I0805 23:15:23.421466   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 28/120
	I0805 23:15:24.422890   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 29/120
	I0805 23:15:25.425426   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 30/120
	I0805 23:15:26.426798   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 31/120
	I0805 23:15:27.428642   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 32/120
	I0805 23:15:28.430126   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 33/120
	I0805 23:15:29.431424   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 34/120
	I0805 23:15:30.433280   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 35/120
	I0805 23:15:31.434669   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 36/120
	I0805 23:15:32.435947   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 37/120
	I0805 23:15:33.437523   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 38/120
	I0805 23:15:34.438780   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 39/120
	I0805 23:15:35.441007   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 40/120
	I0805 23:15:36.442369   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 41/120
	I0805 23:15:37.443737   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 42/120
	I0805 23:15:38.445102   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 43/120
	I0805 23:15:39.446466   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 44/120
	I0805 23:15:40.448400   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 45/120
	I0805 23:15:41.449710   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 46/120
	I0805 23:15:42.451164   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 47/120
	I0805 23:15:43.452536   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 48/120
	I0805 23:15:44.454014   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 49/120
	I0805 23:15:45.456237   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 50/120
	I0805 23:15:46.458236   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 51/120
	I0805 23:15:47.459883   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 52/120
	I0805 23:15:48.461231   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 53/120
	I0805 23:15:49.462578   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 54/120
	I0805 23:15:50.464468   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 55/120
	I0805 23:15:51.466272   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 56/120
	I0805 23:15:52.467768   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 57/120
	I0805 23:15:53.469044   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 58/120
	I0805 23:15:54.470772   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 59/120
	I0805 23:15:55.472599   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 60/120
	I0805 23:15:56.474057   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 61/120
	I0805 23:15:57.475634   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 62/120
	I0805 23:15:58.477369   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 63/120
	I0805 23:15:59.478606   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 64/120
	I0805 23:16:00.480406   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 65/120
	I0805 23:16:01.481535   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 66/120
	I0805 23:16:02.483179   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 67/120
	I0805 23:16:03.485578   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 68/120
	I0805 23:16:04.486891   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 69/120
	I0805 23:16:05.488468   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 70/120
	I0805 23:16:06.490203   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 71/120
	I0805 23:16:07.491715   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 72/120
	I0805 23:16:08.493735   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 73/120
	I0805 23:16:09.495444   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 74/120
	I0805 23:16:10.497482   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 75/120
	I0805 23:16:11.498753   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 76/120
	I0805 23:16:12.500249   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 77/120
	I0805 23:16:13.501498   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 78/120
	I0805 23:16:14.503015   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 79/120
	I0805 23:16:15.505364   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 80/120
	I0805 23:16:16.506948   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 81/120
	I0805 23:16:17.508296   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 82/120
	I0805 23:16:18.509628   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 83/120
	I0805 23:16:19.511918   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 84/120
	I0805 23:16:20.513378   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 85/120
	I0805 23:16:21.514593   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 86/120
	I0805 23:16:22.515937   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 87/120
	I0805 23:16:23.517403   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 88/120
	I0805 23:16:24.518701   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 89/120
	I0805 23:16:25.520703   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 90/120
	I0805 23:16:26.522148   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 91/120
	I0805 23:16:27.523520   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 92/120
	I0805 23:16:28.524912   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 93/120
	I0805 23:16:29.526381   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 94/120
	I0805 23:16:30.528291   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 95/120
	I0805 23:16:31.529687   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 96/120
	I0805 23:16:32.531141   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 97/120
	I0805 23:16:33.532589   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 98/120
	I0805 23:16:34.534964   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 99/120
	I0805 23:16:35.536559   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 100/120
	I0805 23:16:36.539072   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 101/120
	I0805 23:16:37.540365   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 102/120
	I0805 23:16:38.541665   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 103/120
	I0805 23:16:39.543093   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 104/120
	I0805 23:16:40.545218   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 105/120
	I0805 23:16:41.546555   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 106/120
	I0805 23:16:42.548024   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 107/120
	I0805 23:16:43.549438   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 108/120
	I0805 23:16:44.550766   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 109/120
	I0805 23:16:45.552670   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 110/120
	I0805 23:16:46.555204   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 111/120
	I0805 23:16:47.556650   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 112/120
	I0805 23:16:48.558187   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 113/120
	I0805 23:16:49.559608   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 114/120
	I0805 23:16:50.561492   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 115/120
	I0805 23:16:51.562991   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 116/120
	I0805 23:16:52.564340   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 117/120
	I0805 23:16:53.565625   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 118/120
	I0805 23:16:54.566963   33401 main.go:141] libmachine: (ha-044175-m02) Waiting for machine to stop 119/120
	I0805 23:16:55.568217   33401 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0805 23:16:55.568333   33401 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-044175 node stop m02 -v=7 --alsologtostderr": exit status 30
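The stop loop above polled the KVM domain for the full 120 attempts without the guest ever shutting down, so libmachine gave up with the host still "Running". A sketch of how the domain state could be inspected and forced off directly with virsh on the Jenkins host, assuming the default libvirt connection used by the kvm2 driver and the domain name ha-044175-m02 shown in the log (manual troubleshooting, not part of the test):

	# list the libvirt domains the kvm2 driver created, with their current state
	virsh list --all
	# ask the guest to shut down gracefully (roughly what the driver attempts)
	virsh shutdown ha-044175-m02
	# if the guest ignores the ACPI shutdown, force it off
	virsh destroy ha-044175-m02
	# confirm the domain now reports "shut off"
	virsh domstate ha-044175-m02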
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-044175 status -v=7 --alsologtostderr: exit status 3 (19.241790694s)

                                                
                                                
-- stdout --
	ha-044175
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-044175-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-044175-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-044175-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 23:16:55.610350   33832 out.go:291] Setting OutFile to fd 1 ...
	I0805 23:16:55.610461   33832 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:16:55.610469   33832 out.go:304] Setting ErrFile to fd 2...
	I0805 23:16:55.610473   33832 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:16:55.610655   33832 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-9606/.minikube/bin
	I0805 23:16:55.610889   33832 out.go:298] Setting JSON to false
	I0805 23:16:55.610920   33832 mustload.go:65] Loading cluster: ha-044175
	I0805 23:16:55.610953   33832 notify.go:220] Checking for updates...
	I0805 23:16:55.611270   33832 config.go:182] Loaded profile config "ha-044175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:16:55.611284   33832 status.go:255] checking status of ha-044175 ...
	I0805 23:16:55.611657   33832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:16:55.611715   33832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:16:55.630265   33832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35159
	I0805 23:16:55.630669   33832 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:16:55.631365   33832 main.go:141] libmachine: Using API Version  1
	I0805 23:16:55.631399   33832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:16:55.631714   33832 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:16:55.631889   33832 main.go:141] libmachine: (ha-044175) Calling .GetState
	I0805 23:16:55.633127   33832 status.go:330] ha-044175 host status = "Running" (err=<nil>)
	I0805 23:16:55.633154   33832 host.go:66] Checking if "ha-044175" exists ...
	I0805 23:16:55.633413   33832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:16:55.633458   33832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:16:55.647626   33832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45835
	I0805 23:16:55.648005   33832 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:16:55.648472   33832 main.go:141] libmachine: Using API Version  1
	I0805 23:16:55.648496   33832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:16:55.648781   33832 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:16:55.648968   33832 main.go:141] libmachine: (ha-044175) Calling .GetIP
	I0805 23:16:55.651699   33832 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:16:55.652206   33832 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:16:55.652247   33832 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:16:55.652394   33832 host.go:66] Checking if "ha-044175" exists ...
	I0805 23:16:55.652669   33832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:16:55.652706   33832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:16:55.667815   33832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45859
	I0805 23:16:55.668279   33832 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:16:55.668831   33832 main.go:141] libmachine: Using API Version  1
	I0805 23:16:55.668852   33832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:16:55.669186   33832 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:16:55.669353   33832 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:16:55.669635   33832 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 23:16:55.669665   33832 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:16:55.672712   33832 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:16:55.673065   33832 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:16:55.673083   33832 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:16:55.673221   33832 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:16:55.673363   33832 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:16:55.673525   33832 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:16:55.673670   33832 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/id_rsa Username:docker}
	I0805 23:16:55.759780   33832 ssh_runner.go:195] Run: systemctl --version
	I0805 23:16:55.767432   33832 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 23:16:55.787737   33832 kubeconfig.go:125] found "ha-044175" server: "https://192.168.39.254:8443"
	I0805 23:16:55.787764   33832 api_server.go:166] Checking apiserver status ...
	I0805 23:16:55.787823   33832 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 23:16:55.807506   33832 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup
	W0805 23:16:55.819647   33832 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 23:16:55.819700   33832 ssh_runner.go:195] Run: ls
	I0805 23:16:55.824897   33832 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0805 23:16:55.829407   33832 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0805 23:16:55.829434   33832 status.go:422] ha-044175 apiserver status = Running (err=<nil>)
	I0805 23:16:55.829446   33832 status.go:257] ha-044175 status: &{Name:ha-044175 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 23:16:55.829465   33832 status.go:255] checking status of ha-044175-m02 ...
	I0805 23:16:55.829753   33832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:16:55.829796   33832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:16:55.845271   33832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42809
	I0805 23:16:55.845739   33832 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:16:55.846353   33832 main.go:141] libmachine: Using API Version  1
	I0805 23:16:55.846376   33832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:16:55.846731   33832 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:16:55.846920   33832 main.go:141] libmachine: (ha-044175-m02) Calling .GetState
	I0805 23:16:55.848801   33832 status.go:330] ha-044175-m02 host status = "Running" (err=<nil>)
	I0805 23:16:55.848818   33832 host.go:66] Checking if "ha-044175-m02" exists ...
	I0805 23:16:55.849133   33832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:16:55.849189   33832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:16:55.864394   33832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43821
	I0805 23:16:55.864805   33832 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:16:55.865298   33832 main.go:141] libmachine: Using API Version  1
	I0805 23:16:55.865324   33832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:16:55.865619   33832 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:16:55.865753   33832 main.go:141] libmachine: (ha-044175-m02) Calling .GetIP
	I0805 23:16:55.868374   33832 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:16:55.868729   33832 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:16:55.868751   33832 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:16:55.868909   33832 host.go:66] Checking if "ha-044175-m02" exists ...
	I0805 23:16:55.869310   33832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:16:55.869344   33832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:16:55.884116   33832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33631
	I0805 23:16:55.884493   33832 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:16:55.884967   33832 main.go:141] libmachine: Using API Version  1
	I0805 23:16:55.884991   33832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:16:55.885317   33832 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:16:55.885504   33832 main.go:141] libmachine: (ha-044175-m02) Calling .DriverName
	I0805 23:16:55.885695   33832 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 23:16:55.885715   33832 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHHostname
	I0805 23:16:55.888529   33832 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:16:55.888919   33832 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:16:55.888936   33832 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:16:55.889122   33832 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHPort
	I0805 23:16:55.889295   33832 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHKeyPath
	I0805 23:16:55.889501   33832 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHUsername
	I0805 23:16:55.889643   33832 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m02/id_rsa Username:docker}
	W0805 23:17:14.443305   33832 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.112:22: connect: no route to host
	W0805 23:17:14.443401   33832 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.112:22: connect: no route to host
	E0805 23:17:14.443422   33832 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.112:22: connect: no route to host
	I0805 23:17:14.443430   33832 status.go:257] ha-044175-m02 status: &{Name:ha-044175-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0805 23:17:14.443464   33832 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.112:22: connect: no route to host
	I0805 23:17:14.443483   33832 status.go:255] checking status of ha-044175-m03 ...
	I0805 23:17:14.443813   33832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:14.443872   33832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:14.458551   33832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41391
	I0805 23:17:14.459135   33832 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:14.459611   33832 main.go:141] libmachine: Using API Version  1
	I0805 23:17:14.459634   33832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:14.459990   33832 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:14.460215   33832 main.go:141] libmachine: (ha-044175-m03) Calling .GetState
	I0805 23:17:14.462124   33832 status.go:330] ha-044175-m03 host status = "Running" (err=<nil>)
	I0805 23:17:14.462141   33832 host.go:66] Checking if "ha-044175-m03" exists ...
	I0805 23:17:14.462502   33832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:14.462550   33832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:14.478221   33832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37109
	I0805 23:17:14.478615   33832 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:14.479108   33832 main.go:141] libmachine: Using API Version  1
	I0805 23:17:14.479126   33832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:14.479397   33832 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:14.479582   33832 main.go:141] libmachine: (ha-044175-m03) Calling .GetIP
	I0805 23:17:14.482354   33832 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:17:14.482835   33832 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:17:14.482867   33832 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:17:14.483008   33832 host.go:66] Checking if "ha-044175-m03" exists ...
	I0805 23:17:14.483434   33832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:14.483477   33832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:14.498908   33832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36449
	I0805 23:17:14.499446   33832 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:14.499954   33832 main.go:141] libmachine: Using API Version  1
	I0805 23:17:14.499979   33832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:14.500260   33832 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:14.500436   33832 main.go:141] libmachine: (ha-044175-m03) Calling .DriverName
	I0805 23:17:14.500602   33832 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 23:17:14.500623   33832 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHHostname
	I0805 23:17:14.503153   33832 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:17:14.503599   33832 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:17:14.503619   33832 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:17:14.503777   33832 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHPort
	I0805 23:17:14.503970   33832 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHKeyPath
	I0805 23:17:14.504120   33832 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHUsername
	I0805 23:17:14.504247   33832 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m03/id_rsa Username:docker}
	I0805 23:17:14.592766   33832 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 23:17:14.611489   33832 kubeconfig.go:125] found "ha-044175" server: "https://192.168.39.254:8443"
	I0805 23:17:14.611515   33832 api_server.go:166] Checking apiserver status ...
	I0805 23:17:14.611549   33832 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 23:17:14.627865   33832 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1566/cgroup
	W0805 23:17:14.638702   33832 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1566/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 23:17:14.638760   33832 ssh_runner.go:195] Run: ls
	I0805 23:17:14.645366   33832 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0805 23:17:14.649542   33832 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0805 23:17:14.649566   33832 status.go:422] ha-044175-m03 apiserver status = Running (err=<nil>)
	I0805 23:17:14.649574   33832 status.go:257] ha-044175-m03 status: &{Name:ha-044175-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 23:17:14.649587   33832 status.go:255] checking status of ha-044175-m04 ...
	I0805 23:17:14.649871   33832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:14.649910   33832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:14.664695   33832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36637
	I0805 23:17:14.665202   33832 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:14.665686   33832 main.go:141] libmachine: Using API Version  1
	I0805 23:17:14.665706   33832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:14.666070   33832 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:14.666293   33832 main.go:141] libmachine: (ha-044175-m04) Calling .GetState
	I0805 23:17:14.668033   33832 status.go:330] ha-044175-m04 host status = "Running" (err=<nil>)
	I0805 23:17:14.668051   33832 host.go:66] Checking if "ha-044175-m04" exists ...
	I0805 23:17:14.668430   33832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:14.668503   33832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:14.683181   33832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43923
	I0805 23:17:14.683657   33832 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:14.684103   33832 main.go:141] libmachine: Using API Version  1
	I0805 23:17:14.684125   33832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:14.684392   33832 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:14.684597   33832 main.go:141] libmachine: (ha-044175-m04) Calling .GetIP
	I0805 23:17:14.687015   33832 main.go:141] libmachine: (ha-044175-m04) DBG | domain ha-044175-m04 has defined MAC address 52:54:00:e5:ba:4d in network mk-ha-044175
	I0805 23:17:14.687437   33832 main.go:141] libmachine: (ha-044175-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:ba:4d", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:13:59 +0000 UTC Type:0 Mac:52:54:00:e5:ba:4d Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-044175-m04 Clientid:01:52:54:00:e5:ba:4d}
	I0805 23:17:14.687464   33832 main.go:141] libmachine: (ha-044175-m04) DBG | domain ha-044175-m04 has defined IP address 192.168.39.228 and MAC address 52:54:00:e5:ba:4d in network mk-ha-044175
	I0805 23:17:14.687605   33832 host.go:66] Checking if "ha-044175-m04" exists ...
	I0805 23:17:14.687899   33832 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:14.687936   33832 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:14.702234   33832 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44651
	I0805 23:17:14.702608   33832 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:14.703070   33832 main.go:141] libmachine: Using API Version  1
	I0805 23:17:14.703095   33832 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:14.703402   33832 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:14.703582   33832 main.go:141] libmachine: (ha-044175-m04) Calling .DriverName
	I0805 23:17:14.703747   33832 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 23:17:14.703765   33832 main.go:141] libmachine: (ha-044175-m04) Calling .GetSSHHostname
	I0805 23:17:14.706236   33832 main.go:141] libmachine: (ha-044175-m04) DBG | domain ha-044175-m04 has defined MAC address 52:54:00:e5:ba:4d in network mk-ha-044175
	I0805 23:17:14.706605   33832 main.go:141] libmachine: (ha-044175-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:ba:4d", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:13:59 +0000 UTC Type:0 Mac:52:54:00:e5:ba:4d Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-044175-m04 Clientid:01:52:54:00:e5:ba:4d}
	I0805 23:17:14.706626   33832 main.go:141] libmachine: (ha-044175-m04) DBG | domain ha-044175-m04 has defined IP address 192.168.39.228 and MAC address 52:54:00:e5:ba:4d in network mk-ha-044175
	I0805 23:17:14.706782   33832 main.go:141] libmachine: (ha-044175-m04) Calling .GetSSHPort
	I0805 23:17:14.706930   33832 main.go:141] libmachine: (ha-044175-m04) Calling .GetSSHKeyPath
	I0805 23:17:14.707075   33832 main.go:141] libmachine: (ha-044175-m04) Calling .GetSSHUsername
	I0805 23:17:14.707187   33832 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m04/id_rsa Username:docker}
	I0805 23:17:14.791993   33832 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 23:17:14.809758   33832 status.go:257] ha-044175-m04 status: &{Name:ha-044175-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-044175 status -v=7 --alsologtostderr" : exit status 3
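The status check fails only for m02, and the underlying error is "dial tcp 192.168.39.112:22: connect: no route to host": libvirt still reported the domain as running, yet the guest no longer answered on its NIC, which suggests it was part-way through shutting down. A small sketch of connectivity checks one could run from the Jenkins host to separate "VM down" from "SSH down", assuming the m02 address and SSH key path taken from the log above:

	# does the guest answer on the network at all?
	ping -c 3 192.168.39.112
	# is the SSH port reachable? (the status command dials port 22 before running its probes)
	nc -zv -w 5 192.168.39.112 22
	# if SSH is up, the same kubelet probe the status command runs can be tried by hand
	ssh -i /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m02/id_rsa \
	    docker@192.168.39.112 'systemctl is-active kubelet'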
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-044175 -n ha-044175
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-044175 logs -n 25: (1.41158502s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-044175 cp ha-044175-m03:/home/docker/cp-test.txt                              | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3481107746/001/cp-test_ha-044175-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n                                                                 | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-044175 cp ha-044175-m03:/home/docker/cp-test.txt                              | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175:/home/docker/cp-test_ha-044175-m03_ha-044175.txt                       |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n                                                                 | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n ha-044175 sudo cat                                              | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | /home/docker/cp-test_ha-044175-m03_ha-044175.txt                                 |           |         |         |                     |                     |
	| cp      | ha-044175 cp ha-044175-m03:/home/docker/cp-test.txt                              | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m02:/home/docker/cp-test_ha-044175-m03_ha-044175-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n                                                                 | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n ha-044175-m02 sudo cat                                          | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | /home/docker/cp-test_ha-044175-m03_ha-044175-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-044175 cp ha-044175-m03:/home/docker/cp-test.txt                              | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m04:/home/docker/cp-test_ha-044175-m03_ha-044175-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n                                                                 | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n ha-044175-m04 sudo cat                                          | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | /home/docker/cp-test_ha-044175-m03_ha-044175-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-044175 cp testdata/cp-test.txt                                                | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n                                                                 | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-044175 cp ha-044175-m04:/home/docker/cp-test.txt                              | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3481107746/001/cp-test_ha-044175-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n                                                                 | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-044175 cp ha-044175-m04:/home/docker/cp-test.txt                              | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175:/home/docker/cp-test_ha-044175-m04_ha-044175.txt                       |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n                                                                 | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n ha-044175 sudo cat                                              | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | /home/docker/cp-test_ha-044175-m04_ha-044175.txt                                 |           |         |         |                     |                     |
	| cp      | ha-044175 cp ha-044175-m04:/home/docker/cp-test.txt                              | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m02:/home/docker/cp-test_ha-044175-m04_ha-044175-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n                                                                 | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n ha-044175-m02 sudo cat                                          | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | /home/docker/cp-test_ha-044175-m04_ha-044175-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-044175 cp ha-044175-m04:/home/docker/cp-test.txt                              | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m03:/home/docker/cp-test_ha-044175-m04_ha-044175-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n                                                                 | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n ha-044175-m03 sudo cat                                          | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | /home/docker/cp-test_ha-044175-m04_ha-044175-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-044175 node stop m02 -v=7                                                     | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 23:10:00
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 23:10:00.718936   28839 out.go:291] Setting OutFile to fd 1 ...
	I0805 23:10:00.719071   28839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:10:00.719082   28839 out.go:304] Setting ErrFile to fd 2...
	I0805 23:10:00.719089   28839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:10:00.719264   28839 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-9606/.minikube/bin
	I0805 23:10:00.719821   28839 out.go:298] Setting JSON to false
	I0805 23:10:00.720707   28839 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3147,"bootTime":1722896254,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0805 23:10:00.720765   28839 start.go:139] virtualization: kvm guest
	I0805 23:10:00.723090   28839 out.go:177] * [ha-044175] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0805 23:10:00.724859   28839 notify.go:220] Checking for updates...
	I0805 23:10:00.724881   28839 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 23:10:00.726355   28839 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 23:10:00.727722   28839 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19373-9606/kubeconfig
	I0805 23:10:00.729247   28839 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19373-9606/.minikube
	I0805 23:10:00.730647   28839 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0805 23:10:00.731953   28839 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 23:10:00.733364   28839 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 23:10:00.768508   28839 out.go:177] * Using the kvm2 driver based on user configuration
	I0805 23:10:00.769796   28839 start.go:297] selected driver: kvm2
	I0805 23:10:00.769817   28839 start.go:901] validating driver "kvm2" against <nil>
	I0805 23:10:00.769828   28839 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 23:10:00.770541   28839 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 23:10:00.770614   28839 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19373-9606/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0805 23:10:00.786160   28839 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0805 23:10:00.786223   28839 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 23:10:00.786474   28839 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 23:10:00.786523   28839 cni.go:84] Creating CNI manager for ""
	I0805 23:10:00.786533   28839 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0805 23:10:00.786537   28839 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0805 23:10:00.786605   28839 start.go:340] cluster config:
	{Name:ha-044175 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-044175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 23:10:00.786703   28839 iso.go:125] acquiring lock: {Name:mk54a637ed625e04bb2b6adf973b61c976cd6d35 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 23:10:00.788752   28839 out.go:177] * Starting "ha-044175" primary control-plane node in "ha-044175" cluster
	I0805 23:10:00.790061   28839 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 23:10:00.790106   28839 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0805 23:10:00.790113   28839 cache.go:56] Caching tarball of preloaded images
	I0805 23:10:00.790183   28839 preload.go:172] Found /home/jenkins/minikube-integration/19373-9606/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0805 23:10:00.790193   28839 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0805 23:10:00.790469   28839 profile.go:143] Saving config to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/config.json ...
	I0805 23:10:00.790488   28839 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/config.json: {Name:mk8c38569b7ea25c26897d16a4c42d0fe2104a00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 23:10:00.790610   28839 start.go:360] acquireMachinesLock for ha-044175: {Name:mkd2ba511c39504598222edbf83078b718329186 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 23:10:00.790645   28839 start.go:364] duration metric: took 22.585µs to acquireMachinesLock for "ha-044175"
	I0805 23:10:00.790660   28839 start.go:93] Provisioning new machine with config: &{Name:ha-044175 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-044175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 23:10:00.790713   28839 start.go:125] createHost starting for "" (driver="kvm2")
	I0805 23:10:00.793461   28839 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 23:10:00.793604   28839 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:10:00.793643   28839 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:10:00.807872   28839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44653
	I0805 23:10:00.808276   28839 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:10:00.808860   28839 main.go:141] libmachine: Using API Version  1
	I0805 23:10:00.808885   28839 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:10:00.809199   28839 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:10:00.809389   28839 main.go:141] libmachine: (ha-044175) Calling .GetMachineName
	I0805 23:10:00.809553   28839 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:10:00.809686   28839 start.go:159] libmachine.API.Create for "ha-044175" (driver="kvm2")
	I0805 23:10:00.809713   28839 client.go:168] LocalClient.Create starting
	I0805 23:10:00.809747   28839 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem
	I0805 23:10:00.809793   28839 main.go:141] libmachine: Decoding PEM data...
	I0805 23:10:00.809818   28839 main.go:141] libmachine: Parsing certificate...
	I0805 23:10:00.809891   28839 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem
	I0805 23:10:00.809919   28839 main.go:141] libmachine: Decoding PEM data...
	I0805 23:10:00.809938   28839 main.go:141] libmachine: Parsing certificate...
	I0805 23:10:00.809964   28839 main.go:141] libmachine: Running pre-create checks...
	I0805 23:10:00.809977   28839 main.go:141] libmachine: (ha-044175) Calling .PreCreateCheck
	I0805 23:10:00.810289   28839 main.go:141] libmachine: (ha-044175) Calling .GetConfigRaw
	I0805 23:10:00.810647   28839 main.go:141] libmachine: Creating machine...
	I0805 23:10:00.810661   28839 main.go:141] libmachine: (ha-044175) Calling .Create
	I0805 23:10:00.810782   28839 main.go:141] libmachine: (ha-044175) Creating KVM machine...
	I0805 23:10:00.811931   28839 main.go:141] libmachine: (ha-044175) DBG | found existing default KVM network
	I0805 23:10:00.812569   28839 main.go:141] libmachine: (ha-044175) DBG | I0805 23:10:00.812433   28863 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0805 23:10:00.812588   28839 main.go:141] libmachine: (ha-044175) DBG | created network xml: 
	I0805 23:10:00.812600   28839 main.go:141] libmachine: (ha-044175) DBG | <network>
	I0805 23:10:00.812607   28839 main.go:141] libmachine: (ha-044175) DBG |   <name>mk-ha-044175</name>
	I0805 23:10:00.812617   28839 main.go:141] libmachine: (ha-044175) DBG |   <dns enable='no'/>
	I0805 23:10:00.812632   28839 main.go:141] libmachine: (ha-044175) DBG |   
	I0805 23:10:00.812671   28839 main.go:141] libmachine: (ha-044175) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0805 23:10:00.812692   28839 main.go:141] libmachine: (ha-044175) DBG |     <dhcp>
	I0805 23:10:00.812704   28839 main.go:141] libmachine: (ha-044175) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0805 23:10:00.812725   28839 main.go:141] libmachine: (ha-044175) DBG |     </dhcp>
	I0805 23:10:00.812748   28839 main.go:141] libmachine: (ha-044175) DBG |   </ip>
	I0805 23:10:00.812764   28839 main.go:141] libmachine: (ha-044175) DBG |   
	I0805 23:10:00.812773   28839 main.go:141] libmachine: (ha-044175) DBG | </network>
	I0805 23:10:00.812777   28839 main.go:141] libmachine: (ha-044175) DBG | 
	I0805 23:10:00.817725   28839 main.go:141] libmachine: (ha-044175) DBG | trying to create private KVM network mk-ha-044175 192.168.39.0/24...
	I0805 23:10:00.882500   28839 main.go:141] libmachine: (ha-044175) DBG | private KVM network mk-ha-044175 192.168.39.0/24 created
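Editor's note: the DBG lines above show the network XML the kvm2 driver generated for the private mk-ha-044175 network before creating it. As a minimal sketch of how such a network can be defined and started with the libvirt.org/go/libvirt Go bindings (this is illustrative only, not the driver's actual code; the URI and XML are copied from the log above):

package main

import (
	"log"

	"libvirt.org/go/libvirt"
)

// networkXML mirrors the XML printed in the log above.
const networkXML = `<network>
  <name>mk-ha-044175</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	// Connect to the system libvirt daemon (same URI as KVMQemuURI in the config).
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connecting to libvirt: %v", err)
	}
	defer conn.Close()

	// Define the network persistently, then bring it up.
	network, err := conn.NetworkDefineXML(networkXML)
	if err != nil {
		log.Fatalf("defining network: %v", err)
	}
	defer network.Free()
	if err := network.Create(); err != nil {
		log.Fatalf("starting network: %v", err)
	}
}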
	I0805 23:10:00.882533   28839 main.go:141] libmachine: (ha-044175) DBG | I0805 23:10:00.882456   28863 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19373-9606/.minikube
	I0805 23:10:00.882547   28839 main.go:141] libmachine: (ha-044175) Setting up store path in /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175 ...
	I0805 23:10:00.882567   28839 main.go:141] libmachine: (ha-044175) Building disk image from file:///home/jenkins/minikube-integration/19373-9606/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0805 23:10:00.882636   28839 main.go:141] libmachine: (ha-044175) Downloading /home/jenkins/minikube-integration/19373-9606/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19373-9606/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0805 23:10:01.119900   28839 main.go:141] libmachine: (ha-044175) DBG | I0805 23:10:01.119732   28863 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/id_rsa...
	I0805 23:10:01.238103   28839 main.go:141] libmachine: (ha-044175) DBG | I0805 23:10:01.237978   28863 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/ha-044175.rawdisk...
	I0805 23:10:01.238131   28839 main.go:141] libmachine: (ha-044175) DBG | Writing magic tar header
	I0805 23:10:01.238142   28839 main.go:141] libmachine: (ha-044175) DBG | Writing SSH key tar header
	I0805 23:10:01.238149   28839 main.go:141] libmachine: (ha-044175) DBG | I0805 23:10:01.238092   28863 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175 ...
	I0805 23:10:01.238295   28839 main.go:141] libmachine: (ha-044175) Setting executable bit set on /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175 (perms=drwx------)
	I0805 23:10:01.238326   28839 main.go:141] libmachine: (ha-044175) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175
	I0805 23:10:01.238337   28839 main.go:141] libmachine: (ha-044175) Setting executable bit set on /home/jenkins/minikube-integration/19373-9606/.minikube/machines (perms=drwxr-xr-x)
	I0805 23:10:01.238364   28839 main.go:141] libmachine: (ha-044175) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19373-9606/.minikube/machines
	I0805 23:10:01.238383   28839 main.go:141] libmachine: (ha-044175) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19373-9606/.minikube
	I0805 23:10:01.238397   28839 main.go:141] libmachine: (ha-044175) Setting executable bit set on /home/jenkins/minikube-integration/19373-9606/.minikube (perms=drwxr-xr-x)
	I0805 23:10:01.238410   28839 main.go:141] libmachine: (ha-044175) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19373-9606
	I0805 23:10:01.238434   28839 main.go:141] libmachine: (ha-044175) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0805 23:10:01.238450   28839 main.go:141] libmachine: (ha-044175) Setting executable bit set on /home/jenkins/minikube-integration/19373-9606 (perms=drwxrwxr-x)
	I0805 23:10:01.238458   28839 main.go:141] libmachine: (ha-044175) DBG | Checking permissions on dir: /home/jenkins
	I0805 23:10:01.238477   28839 main.go:141] libmachine: (ha-044175) DBG | Checking permissions on dir: /home
	I0805 23:10:01.238489   28839 main.go:141] libmachine: (ha-044175) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0805 23:10:01.238496   28839 main.go:141] libmachine: (ha-044175) DBG | Skipping /home - not owner
	I0805 23:10:01.238508   28839 main.go:141] libmachine: (ha-044175) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0805 23:10:01.238518   28839 main.go:141] libmachine: (ha-044175) Creating domain...
	I0805 23:10:01.239457   28839 main.go:141] libmachine: (ha-044175) define libvirt domain using xml: 
	I0805 23:10:01.239477   28839 main.go:141] libmachine: (ha-044175) <domain type='kvm'>
	I0805 23:10:01.239487   28839 main.go:141] libmachine: (ha-044175)   <name>ha-044175</name>
	I0805 23:10:01.239496   28839 main.go:141] libmachine: (ha-044175)   <memory unit='MiB'>2200</memory>
	I0805 23:10:01.239503   28839 main.go:141] libmachine: (ha-044175)   <vcpu>2</vcpu>
	I0805 23:10:01.239508   28839 main.go:141] libmachine: (ha-044175)   <features>
	I0805 23:10:01.239517   28839 main.go:141] libmachine: (ha-044175)     <acpi/>
	I0805 23:10:01.239521   28839 main.go:141] libmachine: (ha-044175)     <apic/>
	I0805 23:10:01.239528   28839 main.go:141] libmachine: (ha-044175)     <pae/>
	I0805 23:10:01.239543   28839 main.go:141] libmachine: (ha-044175)     
	I0805 23:10:01.239566   28839 main.go:141] libmachine: (ha-044175)   </features>
	I0805 23:10:01.239586   28839 main.go:141] libmachine: (ha-044175)   <cpu mode='host-passthrough'>
	I0805 23:10:01.239598   28839 main.go:141] libmachine: (ha-044175)   
	I0805 23:10:01.239605   28839 main.go:141] libmachine: (ha-044175)   </cpu>
	I0805 23:10:01.239615   28839 main.go:141] libmachine: (ha-044175)   <os>
	I0805 23:10:01.239622   28839 main.go:141] libmachine: (ha-044175)     <type>hvm</type>
	I0805 23:10:01.239633   28839 main.go:141] libmachine: (ha-044175)     <boot dev='cdrom'/>
	I0805 23:10:01.239643   28839 main.go:141] libmachine: (ha-044175)     <boot dev='hd'/>
	I0805 23:10:01.239665   28839 main.go:141] libmachine: (ha-044175)     <bootmenu enable='no'/>
	I0805 23:10:01.239677   28839 main.go:141] libmachine: (ha-044175)   </os>
	I0805 23:10:01.239683   28839 main.go:141] libmachine: (ha-044175)   <devices>
	I0805 23:10:01.239690   28839 main.go:141] libmachine: (ha-044175)     <disk type='file' device='cdrom'>
	I0805 23:10:01.239700   28839 main.go:141] libmachine: (ha-044175)       <source file='/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/boot2docker.iso'/>
	I0805 23:10:01.239705   28839 main.go:141] libmachine: (ha-044175)       <target dev='hdc' bus='scsi'/>
	I0805 23:10:01.239712   28839 main.go:141] libmachine: (ha-044175)       <readonly/>
	I0805 23:10:01.239717   28839 main.go:141] libmachine: (ha-044175)     </disk>
	I0805 23:10:01.239725   28839 main.go:141] libmachine: (ha-044175)     <disk type='file' device='disk'>
	I0805 23:10:01.239731   28839 main.go:141] libmachine: (ha-044175)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0805 23:10:01.239741   28839 main.go:141] libmachine: (ha-044175)       <source file='/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/ha-044175.rawdisk'/>
	I0805 23:10:01.239748   28839 main.go:141] libmachine: (ha-044175)       <target dev='hda' bus='virtio'/>
	I0805 23:10:01.239753   28839 main.go:141] libmachine: (ha-044175)     </disk>
	I0805 23:10:01.239760   28839 main.go:141] libmachine: (ha-044175)     <interface type='network'>
	I0805 23:10:01.239765   28839 main.go:141] libmachine: (ha-044175)       <source network='mk-ha-044175'/>
	I0805 23:10:01.239772   28839 main.go:141] libmachine: (ha-044175)       <model type='virtio'/>
	I0805 23:10:01.239790   28839 main.go:141] libmachine: (ha-044175)     </interface>
	I0805 23:10:01.239808   28839 main.go:141] libmachine: (ha-044175)     <interface type='network'>
	I0805 23:10:01.239815   28839 main.go:141] libmachine: (ha-044175)       <source network='default'/>
	I0805 23:10:01.239824   28839 main.go:141] libmachine: (ha-044175)       <model type='virtio'/>
	I0805 23:10:01.239832   28839 main.go:141] libmachine: (ha-044175)     </interface>
	I0805 23:10:01.239837   28839 main.go:141] libmachine: (ha-044175)     <serial type='pty'>
	I0805 23:10:01.239844   28839 main.go:141] libmachine: (ha-044175)       <target port='0'/>
	I0805 23:10:01.239848   28839 main.go:141] libmachine: (ha-044175)     </serial>
	I0805 23:10:01.239853   28839 main.go:141] libmachine: (ha-044175)     <console type='pty'>
	I0805 23:10:01.239858   28839 main.go:141] libmachine: (ha-044175)       <target type='serial' port='0'/>
	I0805 23:10:01.239871   28839 main.go:141] libmachine: (ha-044175)     </console>
	I0805 23:10:01.239878   28839 main.go:141] libmachine: (ha-044175)     <rng model='virtio'>
	I0805 23:10:01.239884   28839 main.go:141] libmachine: (ha-044175)       <backend model='random'>/dev/random</backend>
	I0805 23:10:01.239890   28839 main.go:141] libmachine: (ha-044175)     </rng>
	I0805 23:10:01.239895   28839 main.go:141] libmachine: (ha-044175)     
	I0805 23:10:01.239901   28839 main.go:141] libmachine: (ha-044175)     
	I0805 23:10:01.239907   28839 main.go:141] libmachine: (ha-044175)   </devices>
	I0805 23:10:01.239919   28839 main.go:141] libmachine: (ha-044175) </domain>
	I0805 23:10:01.239929   28839 main.go:141] libmachine: (ha-044175) 
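Editor's note: after emitting the domain XML above, the driver defines the domain and boots it (the "Creating domain..." and "Waiting to get IP..." lines that follow). A minimal sketch of that step with the same Go bindings, assuming the full XML has already been assembled into a string, might look like:

package kvmsketch

import (
	"fmt"

	"libvirt.org/go/libvirt"
)

// defineAndStart persists a domain from XML like the one logged above and
// boots it. This is a sketch, not the kvm2 driver's real implementation.
func defineAndStart(conn *libvirt.Connect, domainXML string) error {
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return fmt.Errorf("defining domain: %w", err)
	}
	defer dom.Free()

	// Create() on a defined (persistent) domain boots it.
	if err := dom.Create(); err != nil {
		return fmt.Errorf("starting domain: %w", err)
	}
	return nil
}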
	I0805 23:10:01.244433   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:f9:9f:76 in network default
	I0805 23:10:01.245052   28839 main.go:141] libmachine: (ha-044175) Ensuring networks are active...
	I0805 23:10:01.245083   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:01.245895   28839 main.go:141] libmachine: (ha-044175) Ensuring network default is active
	I0805 23:10:01.246314   28839 main.go:141] libmachine: (ha-044175) Ensuring network mk-ha-044175 is active
	I0805 23:10:01.246952   28839 main.go:141] libmachine: (ha-044175) Getting domain xml...
	I0805 23:10:01.247686   28839 main.go:141] libmachine: (ha-044175) Creating domain...
	I0805 23:10:02.446914   28839 main.go:141] libmachine: (ha-044175) Waiting to get IP...
	I0805 23:10:02.447670   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:02.448142   28839 main.go:141] libmachine: (ha-044175) DBG | unable to find current IP address of domain ha-044175 in network mk-ha-044175
	I0805 23:10:02.448213   28839 main.go:141] libmachine: (ha-044175) DBG | I0805 23:10:02.448124   28863 retry.go:31] will retry after 191.25034ms: waiting for machine to come up
	I0805 23:10:02.640708   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:02.641197   28839 main.go:141] libmachine: (ha-044175) DBG | unable to find current IP address of domain ha-044175 in network mk-ha-044175
	I0805 23:10:02.641237   28839 main.go:141] libmachine: (ha-044175) DBG | I0805 23:10:02.641141   28863 retry.go:31] will retry after 358.499245ms: waiting for machine to come up
	I0805 23:10:03.004458   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:03.004821   28839 main.go:141] libmachine: (ha-044175) DBG | unable to find current IP address of domain ha-044175 in network mk-ha-044175
	I0805 23:10:03.004846   28839 main.go:141] libmachine: (ha-044175) DBG | I0805 23:10:03.004783   28863 retry.go:31] will retry after 364.580201ms: waiting for machine to come up
	I0805 23:10:03.371523   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:03.371897   28839 main.go:141] libmachine: (ha-044175) DBG | unable to find current IP address of domain ha-044175 in network mk-ha-044175
	I0805 23:10:03.371917   28839 main.go:141] libmachine: (ha-044175) DBG | I0805 23:10:03.371855   28863 retry.go:31] will retry after 419.904223ms: waiting for machine to come up
	I0805 23:10:03.793500   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:03.793884   28839 main.go:141] libmachine: (ha-044175) DBG | unable to find current IP address of domain ha-044175 in network mk-ha-044175
	I0805 23:10:03.793911   28839 main.go:141] libmachine: (ha-044175) DBG | I0805 23:10:03.793826   28863 retry.go:31] will retry after 491.37058ms: waiting for machine to come up
	I0805 23:10:04.286536   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:04.286776   28839 main.go:141] libmachine: (ha-044175) DBG | unable to find current IP address of domain ha-044175 in network mk-ha-044175
	I0805 23:10:04.286797   28839 main.go:141] libmachine: (ha-044175) DBG | I0805 23:10:04.286748   28863 retry.go:31] will retry after 888.681799ms: waiting for machine to come up
	I0805 23:10:05.176785   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:05.177203   28839 main.go:141] libmachine: (ha-044175) DBG | unable to find current IP address of domain ha-044175 in network mk-ha-044175
	I0805 23:10:05.177246   28839 main.go:141] libmachine: (ha-044175) DBG | I0805 23:10:05.177143   28863 retry.go:31] will retry after 1.004077925s: waiting for machine to come up
	I0805 23:10:06.183184   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:06.183601   28839 main.go:141] libmachine: (ha-044175) DBG | unable to find current IP address of domain ha-044175 in network mk-ha-044175
	I0805 23:10:06.183634   28839 main.go:141] libmachine: (ha-044175) DBG | I0805 23:10:06.183560   28863 retry.go:31] will retry after 904.086074ms: waiting for machine to come up
	I0805 23:10:07.089719   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:07.090237   28839 main.go:141] libmachine: (ha-044175) DBG | unable to find current IP address of domain ha-044175 in network mk-ha-044175
	I0805 23:10:07.090302   28839 main.go:141] libmachine: (ha-044175) DBG | I0805 23:10:07.090183   28863 retry.go:31] will retry after 1.512955902s: waiting for machine to come up
	I0805 23:10:08.605148   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:08.605542   28839 main.go:141] libmachine: (ha-044175) DBG | unable to find current IP address of domain ha-044175 in network mk-ha-044175
	I0805 23:10:08.605567   28839 main.go:141] libmachine: (ha-044175) DBG | I0805 23:10:08.605496   28863 retry.go:31] will retry after 2.282337689s: waiting for machine to come up
	I0805 23:10:10.890002   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:10.890445   28839 main.go:141] libmachine: (ha-044175) DBG | unable to find current IP address of domain ha-044175 in network mk-ha-044175
	I0805 23:10:10.890465   28839 main.go:141] libmachine: (ha-044175) DBG | I0805 23:10:10.890401   28863 retry.go:31] will retry after 2.554606146s: waiting for machine to come up
	I0805 23:10:13.448689   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:13.449556   28839 main.go:141] libmachine: (ha-044175) DBG | unable to find current IP address of domain ha-044175 in network mk-ha-044175
	I0805 23:10:13.449596   28839 main.go:141] libmachine: (ha-044175) DBG | I0805 23:10:13.449510   28863 retry.go:31] will retry after 2.866219855s: waiting for machine to come up
	I0805 23:10:16.316858   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:16.317305   28839 main.go:141] libmachine: (ha-044175) DBG | unable to find current IP address of domain ha-044175 in network mk-ha-044175
	I0805 23:10:16.317323   28839 main.go:141] libmachine: (ha-044175) DBG | I0805 23:10:16.317274   28863 retry.go:31] will retry after 3.484103482s: waiting for machine to come up
	I0805 23:10:19.805811   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:19.806296   28839 main.go:141] libmachine: (ha-044175) DBG | unable to find current IP address of domain ha-044175 in network mk-ha-044175
	I0805 23:10:19.806325   28839 main.go:141] libmachine: (ha-044175) DBG | I0805 23:10:19.806243   28863 retry.go:31] will retry after 5.133269507s: waiting for machine to come up
	I0805 23:10:24.944435   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:24.944843   28839 main.go:141] libmachine: (ha-044175) Found IP for machine: 192.168.39.57
	I0805 23:10:24.944880   28839 main.go:141] libmachine: (ha-044175) Reserving static IP address...
	I0805 23:10:24.944896   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has current primary IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:24.945267   28839 main.go:141] libmachine: (ha-044175) DBG | unable to find host DHCP lease matching {name: "ha-044175", mac: "52:54:00:d0:5f:e4", ip: "192.168.39.57"} in network mk-ha-044175
	I0805 23:10:25.016183   28839 main.go:141] libmachine: (ha-044175) DBG | Getting to WaitForSSH function...
	I0805 23:10:25.016214   28839 main.go:141] libmachine: (ha-044175) Reserved static IP address: 192.168.39.57
	I0805 23:10:25.016226   28839 main.go:141] libmachine: (ha-044175) Waiting for SSH to be available...
	I0805 23:10:25.019000   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:25.019572   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:10:25.019599   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:25.019766   28839 main.go:141] libmachine: (ha-044175) DBG | Using SSH client type: external
	I0805 23:10:25.019793   28839 main.go:141] libmachine: (ha-044175) DBG | Using SSH private key: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/id_rsa (-rw-------)
	I0805 23:10:25.019832   28839 main.go:141] libmachine: (ha-044175) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.57 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 23:10:25.019845   28839 main.go:141] libmachine: (ha-044175) DBG | About to run SSH command:
	I0805 23:10:25.019859   28839 main.go:141] libmachine: (ha-044175) DBG | exit 0
	I0805 23:10:25.143315   28839 main.go:141] libmachine: (ha-044175) DBG | SSH cmd err, output: <nil>: 
	I0805 23:10:25.143539   28839 main.go:141] libmachine: (ha-044175) KVM machine creation complete!
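Editor's note: the block above shows the two wait loops behind machine creation: polling the DHCP leases of mk-ha-044175 with growing retry delays until the guest reports 192.168.39.57, then running `exit 0` over SSH until it succeeds. A generic sketch of that wait-with-backoff pattern in Go (the probe function is a placeholder for "look up the DHCP lease" or "run exit 0 over SSH"; it is not a minikube API):

package wait

import (
	"context"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling probe until it succeeds, sleeping a little
// longer (with jitter) after each failure, as in the "will retry after ..."
// lines above.
func retryWithBackoff(ctx context.Context, probe func() error) error {
	delay := 200 * time.Millisecond
	const maxDelay = 5 * time.Second
	for attempt := 1; ; attempt++ {
		if err := probe(); err == nil {
			return nil
		}
		// Add jitter so concurrent waiters do not retry in lockstep.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		select {
		case <-ctx.Done():
			return fmt.Errorf("gave up after %d attempts: %w", attempt, ctx.Err())
		case <-time.After(sleep):
		}
		delay = delay * 3 / 2 // grow roughly 1.5x per attempt
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

The retry intervals in the log (191ms, 358ms, ... 5.13s) follow roughly this shape: a growing base delay plus random jitter, capped at a few seconds.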
	I0805 23:10:25.143959   28839 main.go:141] libmachine: (ha-044175) Calling .GetConfigRaw
	I0805 23:10:25.144482   28839 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:10:25.144705   28839 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:10:25.144885   28839 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0805 23:10:25.144901   28839 main.go:141] libmachine: (ha-044175) Calling .GetState
	I0805 23:10:25.146441   28839 main.go:141] libmachine: Detecting operating system of created instance...
	I0805 23:10:25.146455   28839 main.go:141] libmachine: Waiting for SSH to be available...
	I0805 23:10:25.146461   28839 main.go:141] libmachine: Getting to WaitForSSH function...
	I0805 23:10:25.146467   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:10:25.148554   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:25.148915   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:10:25.148929   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:25.149036   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:10:25.149207   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:10:25.149378   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:10:25.149585   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:10:25.149764   28839 main.go:141] libmachine: Using SSH client type: native
	I0805 23:10:25.149951   28839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0805 23:10:25.149960   28839 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0805 23:10:25.250816   28839 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 23:10:25.250836   28839 main.go:141] libmachine: Detecting the provisioner...
	I0805 23:10:25.250843   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:10:25.253727   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:25.254273   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:10:25.254299   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:25.254494   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:10:25.254653   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:10:25.254784   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:10:25.254940   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:10:25.255135   28839 main.go:141] libmachine: Using SSH client type: native
	I0805 23:10:25.255318   28839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0805 23:10:25.255329   28839 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0805 23:10:25.356273   28839 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0805 23:10:25.356330   28839 main.go:141] libmachine: found compatible host: buildroot
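Editor's note: provisioner detection above amounts to running `cat /etc/os-release` over SSH and matching the NAME/ID fields (here Buildroot). A small sketch of parsing that key=value format with the Go standard library (illustrative only, not minikube's detector):

package osrelease

import (
	"bufio"
	"strings"
)

// parse turns the contents of /etc/os-release (KEY=value lines, values
// optionally quoted) into a map, e.g. {"NAME": "Buildroot", "ID": "buildroot"}.
func parse(contents string) map[string]string {
	fields := make(map[string]string)
	sc := bufio.NewScanner(strings.NewReader(contents))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		key, value, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		fields[key] = strings.Trim(value, `"`)
	}
	return fields
}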
	I0805 23:10:25.356337   28839 main.go:141] libmachine: Provisioning with buildroot...
	I0805 23:10:25.356346   28839 main.go:141] libmachine: (ha-044175) Calling .GetMachineName
	I0805 23:10:25.356584   28839 buildroot.go:166] provisioning hostname "ha-044175"
	I0805 23:10:25.356609   28839 main.go:141] libmachine: (ha-044175) Calling .GetMachineName
	I0805 23:10:25.356805   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:10:25.359179   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:25.359576   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:10:25.359608   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:25.359785   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:10:25.359980   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:10:25.360142   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:10:25.360309   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:10:25.360518   28839 main.go:141] libmachine: Using SSH client type: native
	I0805 23:10:25.360717   28839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0805 23:10:25.360730   28839 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-044175 && echo "ha-044175" | sudo tee /etc/hostname
	I0805 23:10:25.472972   28839 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-044175
	
	I0805 23:10:25.473002   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:10:25.476342   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:25.476698   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:10:25.476727   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:25.476864   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:10:25.477054   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:10:25.477222   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:10:25.477369   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:10:25.477485   28839 main.go:141] libmachine: Using SSH client type: native
	I0805 23:10:25.477637   28839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0805 23:10:25.477651   28839 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-044175' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-044175/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-044175' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 23:10:25.584203   28839 main.go:141] libmachine: SSH cmd err, output: <nil>: 
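Editor's note: the hostname step above runs shell snippets on the guest through a "native" Go SSH client authenticated with the machine's id_rsa key. A condensed sketch of running one remote command that way with golang.org/x/crypto/ssh (a sketch of the general pattern, not minikube's ssh_runner; the address, user and key path below are taken from the log):

package sshrun

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote dials an SSH server with a private key and runs one command,
// returning its combined output.
func runRemote(addr, user, keyPath, cmd string) (string, error) {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return "", fmt.Errorf("reading key: %w", err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return "", fmt.Errorf("parsing key: %w", err)
	}
	cfg := &ssh.ClientConfig{
		User: user,
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// The external ssh invocation above uses StrictHostKeyChecking=no; mirror that here.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", fmt.Errorf("dialing: %w", err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		return "", fmt.Errorf("opening session: %w", err)
	}
	defer session.Close()

	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

A hypothetical call matching the log would be runRemote("192.168.39.57:22", "docker", ".../machines/ha-044175/id_rsa", `sudo hostname ha-044175 && echo "ha-044175" | sudo tee /etc/hostname`).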
	I0805 23:10:25.584230   28839 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19373-9606/.minikube CaCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19373-9606/.minikube}
	I0805 23:10:25.584275   28839 buildroot.go:174] setting up certificates
	I0805 23:10:25.584292   28839 provision.go:84] configureAuth start
	I0805 23:10:25.584303   28839 main.go:141] libmachine: (ha-044175) Calling .GetMachineName
	I0805 23:10:25.584581   28839 main.go:141] libmachine: (ha-044175) Calling .GetIP
	I0805 23:10:25.587629   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:25.587949   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:10:25.587975   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:25.588124   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:10:25.590515   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:25.590885   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:10:25.590916   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:25.591034   28839 provision.go:143] copyHostCerts
	I0805 23:10:25.591089   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem
	I0805 23:10:25.591138   28839 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem, removing ...
	I0805 23:10:25.591146   28839 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem
	I0805 23:10:25.591209   28839 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem (1123 bytes)
	I0805 23:10:25.591315   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem
	I0805 23:10:25.591347   28839 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem, removing ...
	I0805 23:10:25.591355   28839 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem
	I0805 23:10:25.591390   28839 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem (1679 bytes)
	I0805 23:10:25.591461   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem
	I0805 23:10:25.591487   28839 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem, removing ...
	I0805 23:10:25.591496   28839 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem
	I0805 23:10:25.591527   28839 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem (1082 bytes)
	I0805 23:10:25.591601   28839 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem org=jenkins.ha-044175 san=[127.0.0.1 192.168.39.57 ha-044175 localhost minikube]
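Editor's note: the configureAuth step above issues a server certificate whose SANs cover 127.0.0.1, the machine IP, its hostname, localhost and minikube. A condensed sketch of signing such a certificate with crypto/x509, assuming an existing CA key pair (illustrative only; newServerCert and its parameters are not minikube functions):

package certs

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// newServerCert signs a server certificate with the given CA, covering the
// DNS names and IPs used as SANs in the log above. Returns DER bytes and the key.
func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "ha-044175", Organization: []string{"jenkins.ha-044175"}},
		DNSNames:     []string{"ha-044175", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.57")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}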
	I0805 23:10:25.760201   28839 provision.go:177] copyRemoteCerts
	I0805 23:10:25.760257   28839 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 23:10:25.760278   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:10:25.763102   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:25.763598   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:10:25.763631   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:25.763880   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:10:25.764062   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:10:25.764219   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:10:25.764418   28839 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/id_rsa Username:docker}
	I0805 23:10:25.845623   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 23:10:25.845698   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 23:10:25.870727   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 23:10:25.870805   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0805 23:10:25.896864   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 23:10:25.896954   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 23:10:25.921692   28839 provision.go:87] duration metric: took 337.38411ms to configureAuth
	I0805 23:10:25.921725   28839 buildroot.go:189] setting minikube options for container-runtime
	I0805 23:10:25.921953   28839 config.go:182] Loaded profile config "ha-044175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:10:25.922062   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:10:25.924817   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:25.925226   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:10:25.925247   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:25.925409   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:10:25.925595   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:10:25.925801   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:10:25.925957   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:10:25.926139   28839 main.go:141] libmachine: Using SSH client type: native
	I0805 23:10:25.926290   28839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0805 23:10:25.926303   28839 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 23:10:26.213499   28839 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 23:10:26.213524   28839 main.go:141] libmachine: Checking connection to Docker...
	I0805 23:10:26.213555   28839 main.go:141] libmachine: (ha-044175) Calling .GetURL
	I0805 23:10:26.214928   28839 main.go:141] libmachine: (ha-044175) DBG | Using libvirt version 6000000
	I0805 23:10:26.217217   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:26.217551   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:10:26.217574   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:26.217740   28839 main.go:141] libmachine: Docker is up and running!
	I0805 23:10:26.217774   28839 main.go:141] libmachine: Reticulating splines...
	I0805 23:10:26.217782   28839 client.go:171] duration metric: took 25.40805915s to LocalClient.Create
	I0805 23:10:26.217809   28839 start.go:167] duration metric: took 25.408121999s to libmachine.API.Create "ha-044175"
	I0805 23:10:26.217820   28839 start.go:293] postStartSetup for "ha-044175" (driver="kvm2")
	I0805 23:10:26.217834   28839 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 23:10:26.217856   28839 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:10:26.218087   28839 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 23:10:26.218135   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:10:26.220117   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:26.220430   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:10:26.220452   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:26.220567   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:10:26.220743   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:10:26.220984   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:10:26.221150   28839 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/id_rsa Username:docker}
	I0805 23:10:26.302017   28839 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 23:10:26.306495   28839 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 23:10:26.306525   28839 filesync.go:126] Scanning /home/jenkins/minikube-integration/19373-9606/.minikube/addons for local assets ...
	I0805 23:10:26.306598   28839 filesync.go:126] Scanning /home/jenkins/minikube-integration/19373-9606/.minikube/files for local assets ...
	I0805 23:10:26.306688   28839 filesync.go:149] local asset: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem -> 167922.pem in /etc/ssl/certs
	I0805 23:10:26.306700   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem -> /etc/ssl/certs/167922.pem
	I0805 23:10:26.306834   28839 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 23:10:26.316268   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem --> /etc/ssl/certs/167922.pem (1708 bytes)
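
The two filesync scans above mirror local assets from the minikube home onto the guest, preserving the relative path: files/etc/ssl/certs/167922.pem lands at /etc/ssl/certs/167922.pem. A minimal Go sketch of that mapping, using the root path from the log; it only prints the source -> destination pairs instead of copying them over SSH, and it is not minikube's filesync package.

	// filesync_sketch.go - illustration of the local-asset path mapping.
	package main

	import (
		"fmt"
		"io/fs"
		"path/filepath"
		"strings"
	)

	func main() {
		// Root taken from the log; everything below it maps to the same path on the guest.
		root := "/home/jenkins/minikube-integration/19373-9606/.minikube/files"
		_ = filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
			if err != nil || d.IsDir() {
				return nil // skip unreadable entries and bare directories
			}
			dst := strings.TrimPrefix(p, root) // e.g. /etc/ssl/certs/167922.pem
			fmt.Printf("%s -> %s\n", p, dst)
			return nil
		})
	}
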
	I0805 23:10:26.341081   28839 start.go:296] duration metric: took 123.248464ms for postStartSetup
	I0805 23:10:26.341131   28839 main.go:141] libmachine: (ha-044175) Calling .GetConfigRaw
	I0805 23:10:26.341711   28839 main.go:141] libmachine: (ha-044175) Calling .GetIP
	I0805 23:10:26.344242   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:26.344580   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:10:26.344601   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:26.344857   28839 profile.go:143] Saving config to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/config.json ...
	I0805 23:10:26.345045   28839 start.go:128] duration metric: took 25.554324128s to createHost
	I0805 23:10:26.345065   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:10:26.347316   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:26.347742   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:10:26.347773   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:26.347926   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:10:26.348114   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:10:26.348274   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:10:26.348430   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:10:26.348586   28839 main.go:141] libmachine: Using SSH client type: native
	I0805 23:10:26.348790   28839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0805 23:10:26.348845   28839 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0805 23:10:26.448259   28839 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722899426.426191961
	
	I0805 23:10:26.448285   28839 fix.go:216] guest clock: 1722899426.426191961
	I0805 23:10:26.448293   28839 fix.go:229] Guest: 2024-08-05 23:10:26.426191961 +0000 UTC Remote: 2024-08-05 23:10:26.345055906 +0000 UTC m=+25.661044053 (delta=81.136055ms)
	I0805 23:10:26.448311   28839 fix.go:200] guest clock delta is within tolerance: 81.136055ms
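
The fix.go lines above read the guest clock with `date +%s.%N`, compare it to the host clock, and only resync time when the drift exceeds a tolerance; here the 81.136055ms delta passes. A minimal Go sketch of that comparison, not minikube's code, with a 2s tolerance assumed purely for illustration:

	// clockdrift_sketch.go - illustration of the guest-clock tolerance check.
	package main

	import (
		"fmt"
		"time"
	)

	// withinTolerance reports whether guest and host clocks differ by no more than tol.
	func withinTolerance(guest, host time.Time, tol time.Duration) bool {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta <= tol
	}

	func main() {
		host := time.Now()
		guest := host.Add(81136055 * time.Nanosecond) // the 81.136055ms delta from the log
		fmt.Println("within tolerance:", withinTolerance(guest, host, 2*time.Second))
	}
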
	I0805 23:10:26.448316   28839 start.go:83] releasing machines lock for "ha-044175", held for 25.657662432s
	I0805 23:10:26.448332   28839 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:10:26.448607   28839 main.go:141] libmachine: (ha-044175) Calling .GetIP
	I0805 23:10:26.451550   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:26.451910   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:10:26.451938   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:26.452065   28839 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:10:26.452585   28839 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:10:26.452791   28839 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:10:26.452904   28839 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 23:10:26.452938   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:10:26.453071   28839 ssh_runner.go:195] Run: cat /version.json
	I0805 23:10:26.453103   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:10:26.455498   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:26.455823   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:10:26.455850   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:26.455869   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:26.456007   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:10:26.456262   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:10:26.456307   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:10:26.456327   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:26.456417   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:10:26.456486   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:10:26.456569   28839 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/id_rsa Username:docker}
	I0805 23:10:26.456654   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:10:26.456861   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:10:26.457055   28839 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/id_rsa Username:docker}
	I0805 23:10:26.532093   28839 ssh_runner.go:195] Run: systemctl --version
	I0805 23:10:26.552946   28839 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 23:10:26.717407   28839 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 23:10:26.723705   28839 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 23:10:26.723769   28839 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 23:10:26.740772   28839 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 23:10:26.740799   28839 start.go:495] detecting cgroup driver to use...
	I0805 23:10:26.740872   28839 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 23:10:26.757914   28839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 23:10:26.771892   28839 docker.go:217] disabling cri-docker service (if available) ...
	I0805 23:10:26.771947   28839 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 23:10:26.786392   28839 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 23:10:26.800653   28839 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 23:10:26.912988   28839 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 23:10:27.052129   28839 docker.go:233] disabling docker service ...
	I0805 23:10:27.052196   28839 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 23:10:27.067392   28839 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 23:10:27.080774   28839 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 23:10:27.217830   28839 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 23:10:27.331931   28839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 23:10:27.346720   28839 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 23:10:27.365742   28839 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0805 23:10:27.365794   28839 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:10:27.377789   28839 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 23:10:27.377923   28839 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:10:27.390408   28839 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:10:27.401535   28839 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:10:27.412548   28839 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 23:10:27.423605   28839 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:10:27.434746   28839 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:10:27.452382   28839 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
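
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.9, switch cgroup_manager to cgroupfs, set conmon_cgroup to "pod", and open net.ipv4.ip_unprivileged_port_start through default_sysctls. A minimal Go sketch of the first two rewrites on an in-memory sample; the starting values are assumed, and minikube actually performs these edits over SSH with sed rather than in Go.

	// criocfg_sketch.go - illustration of the pause-image and cgroup-driver rewrites.
	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		// Assumed starting content, standing in for /etc/crio/crio.conf.d/02-crio.conf.
		conf := "[crio.image]\n" +
			"pause_image = \"registry.k8s.io/pause:3.8\"\n" +
			"[crio.runtime]\n" +
			"cgroup_manager = \"systemd\"\n"

		pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)

		conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
		conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

		fmt.Print(conf)
	}
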
	I0805 23:10:27.463232   28839 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 23:10:27.472975   28839 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 23:10:27.473040   28839 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 23:10:27.487200   28839 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 23:10:27.497333   28839 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 23:10:27.605312   28839 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 23:10:27.745378   28839 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 23:10:27.745456   28839 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 23:10:27.750517   28839 start.go:563] Will wait 60s for crictl version
	I0805 23:10:27.750577   28839 ssh_runner.go:195] Run: which crictl
	I0805 23:10:27.754578   28839 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 23:10:27.790577   28839 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 23:10:27.790663   28839 ssh_runner.go:195] Run: crio --version
	I0805 23:10:27.819956   28839 ssh_runner.go:195] Run: crio --version
	I0805 23:10:27.850591   28839 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0805 23:10:27.851744   28839 main.go:141] libmachine: (ha-044175) Calling .GetIP
	I0805 23:10:27.854702   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:27.855041   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:10:27.855092   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:27.855316   28839 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0805 23:10:27.859437   28839 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
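
The bash one-liner above is an upsert of /etc/hosts: drop any existing host.minikube.internal line, append the fresh 192.168.39.1 mapping, and copy the temp file back with sudo. A minimal Go sketch of the same string transformation, operating on an in-memory copy rather than the real file:

	// hostsentry_sketch.go - illustration of the /etc/hosts upsert.
	package main

	import (
		"fmt"
		"strings"
	)

	// upsertHost removes any line ending in "\t<name>" and appends "<ip>\t<name>".
	func upsertHost(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return strings.Join(kept, "\n") + "\n"
	}

	func main() {
		hosts := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"
		fmt.Print(upsertHost(hosts, "192.168.39.1", "host.minikube.internal"))
	}
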
	I0805 23:10:27.872935   28839 kubeadm.go:883] updating cluster {Name:ha-044175 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-044175 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 23:10:27.873039   28839 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 23:10:27.873108   28839 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 23:10:27.904355   28839 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0805 23:10:27.904422   28839 ssh_runner.go:195] Run: which lz4
	I0805 23:10:27.908408   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0805 23:10:27.908486   28839 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0805 23:10:27.912616   28839 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 23:10:27.912637   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0805 23:10:29.394828   28839 crio.go:462] duration metric: took 1.48636381s to copy over tarball
	I0805 23:10:29.394918   28839 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0805 23:10:31.572647   28839 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.177703625s)
	I0805 23:10:31.572670   28839 crio.go:469] duration metric: took 2.177818197s to extract the tarball
	I0805 23:10:31.572679   28839 ssh_runner.go:146] rm: /preloaded.tar.lz4
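
The sequence above is the preload fast path: crictl reports no Kubernetes images yet, so the host scp's the cached preloaded-images tarball (406200976 bytes) to /preloaded.tar.lz4 on the guest, unpacks it into /var with xattrs preserved, then removes the tarball; the second crictl call afterwards sees all images preloaded. A minimal Go sketch of the check-and-extract step, using the paths and flags from the log; it would need root and the lz4 binary on the target host, and it is not minikube's actual helper.

	// preload_sketch.go - illustration of the tarball existence check and extraction.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func ensurePreload(path string) error {
		// Mirrors the `stat ... /preloaded.tar.lz4` existence check in the log.
		if err := exec.Command("stat", path).Run(); err != nil {
			return fmt.Errorf("%s not present, would need to scp it over first: %w", path, err)
		}
		// Mirrors: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
		return exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", path).Run()
	}

	func main() {
		if err := ensurePreload("/preloaded.tar.lz4"); err != nil {
			fmt.Println(err)
		}
	}
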
	I0805 23:10:31.610325   28839 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 23:10:31.658573   28839 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 23:10:31.658597   28839 cache_images.go:84] Images are preloaded, skipping loading
	I0805 23:10:31.658608   28839 kubeadm.go:934] updating node { 192.168.39.57 8443 v1.30.3 crio true true} ...
	I0805 23:10:31.658727   28839 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-044175 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.57
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-044175 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 23:10:31.658810   28839 ssh_runner.go:195] Run: crio config
	I0805 23:10:31.705783   28839 cni.go:84] Creating CNI manager for ""
	I0805 23:10:31.705807   28839 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0805 23:10:31.705819   28839 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 23:10:31.705846   28839 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.57 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-044175 NodeName:ha-044175 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.57"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.57 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 23:10:31.706000   28839 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.57
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-044175"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.57
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.57"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
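
The generated kubeadm config above is four YAML documents in one file: InitConfiguration (node registration, advertise address 192.168.39.57:8443), ClusterConfiguration (control-plane endpoint control-plane.minikube.internal:8443, pod and service CIDRs), KubeletConfiguration (cgroupfs driver, CRI-O socket, eviction disabled), and KubeProxyConfiguration. A minimal Go sketch that splits such a multi-document config on "---" and lists the kinds, just to make that structure explicit; the sample string is abbreviated to the kinds shown in the log.

	// kubeadmdocs_sketch.go - list the document kinds in a multi-document kubeadm config.
	package main

	import (
		"fmt"
		"strings"
	)

	func kinds(cfg string) []string {
		var out []string
		for _, doc := range strings.Split(cfg, "\n---\n") {
			for _, line := range strings.Split(doc, "\n") {
				trimmed := strings.TrimSpace(line)
				if strings.HasPrefix(trimmed, "kind:") {
					out = append(out, strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:")))
				}
			}
		}
		return out
	}

	func main() {
		cfg := "apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n" +
			"---\napiVersion: kubeadm.k8s.io/v1beta3\nkind: ClusterConfiguration\n" +
			"---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n" +
			"---\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration"
		fmt.Println(kinds(cfg)) // [InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration]
	}
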
	
	I0805 23:10:31.706026   28839 kube-vip.go:115] generating kube-vip config ...
	I0805 23:10:31.706074   28839 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0805 23:10:31.722986   28839 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0805 23:10:31.723118   28839 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
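
The static pod above is what gives this HA cluster its control-plane VIP: kube-vip runs on each control-plane node with NET_ADMIN/NET_RAW, does leader election on the plndr-cp-lock lease, answers ARP for 192.168.39.254 from the current leader, and, because lb_enable/lb_port are set, load-balances API traffic to port 8443. A minimal Go sketch, an assumed helper rather than minikube's kube-vip.go, of how the key entries of that env block can be assembled from just the VIP and API port:

	// kubevipenv_sketch.go - assemble the key kube-vip environment entries.
	package main

	import "fmt"

	type envVar struct{ Name, Value string }

	func kubeVipEnv(vip string, apiPort int) []envVar {
		port := fmt.Sprint(apiPort)
		return []envVar{
			{"vip_arp", "true"},       // answer ARP for the VIP
			{"port", port},            // API server port behind the VIP
			{"vip_interface", "eth0"}, // interface that carries the VIP
			{"cp_enable", "true"},     // control-plane mode
			{"vip_leaderelection", "true"},
			{"vip_leasename", "plndr-cp-lock"},
			{"address", vip},
			{"lb_enable", "true"}, // load-balance across control-plane nodes
			{"lb_port", port},
		}
	}

	func main() {
		for _, e := range kubeVipEnv("192.168.39.254", 8443) {
			fmt.Printf("- name: %s\n  value: %q\n", e.Name, e.Value)
		}
	}
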
	I0805 23:10:31.723177   28839 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 23:10:31.741961   28839 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 23:10:31.742025   28839 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0805 23:10:31.752136   28839 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0805 23:10:31.769564   28839 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 23:10:31.786741   28839 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0805 23:10:31.803558   28839 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0805 23:10:31.819843   28839 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0805 23:10:31.823792   28839 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 23:10:31.836641   28839 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 23:10:31.952777   28839 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 23:10:31.971266   28839 certs.go:68] Setting up /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175 for IP: 192.168.39.57
	I0805 23:10:31.971288   28839 certs.go:194] generating shared ca certs ...
	I0805 23:10:31.971308   28839 certs.go:226] acquiring lock for ca certs: {Name:mkf35a042c1656d191f542eee7fa087aad4d29d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 23:10:31.971473   28839 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key
	I0805 23:10:31.971526   28839 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key
	I0805 23:10:31.971540   28839 certs.go:256] generating profile certs ...
	I0805 23:10:31.971600   28839 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/client.key
	I0805 23:10:31.971619   28839 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/client.crt with IP's: []
	I0805 23:10:32.186027   28839 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/client.crt ...
	I0805 23:10:32.186060   28839 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/client.crt: {Name:mk07f71c36a907c49015b5156e5111b3f5d0282b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 23:10:32.186230   28839 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/client.key ...
	I0805 23:10:32.186243   28839 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/client.key: {Name:mk2231a6094437615475c7cdb6cc571cd5b6ea01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 23:10:32.186317   28839 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key.5603f7db
	I0805 23:10:32.186332   28839 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt.5603f7db with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.57 192.168.39.254]
	I0805 23:10:32.420262   28839 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt.5603f7db ...
	I0805 23:10:32.420292   28839 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt.5603f7db: {Name:mk1aaa2ceb51818492d02603eaad68351b66ea14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 23:10:32.420466   28839 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key.5603f7db ...
	I0805 23:10:32.420481   28839 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key.5603f7db: {Name:mkc15366e9b5c5b24b06f390af1f821c8ba7678a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 23:10:32.420566   28839 certs.go:381] copying /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt.5603f7db -> /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt
	I0805 23:10:32.420652   28839 certs.go:385] copying /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key.5603f7db -> /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key
	I0805 23:10:32.420712   28839 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.key
	I0805 23:10:32.420728   28839 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.crt with IP's: []
	I0805 23:10:32.833235   28839 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.crt ...
	I0805 23:10:32.833268   28839 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.crt: {Name:mk8751657827ca3752a30f236a6f3fd31a4706b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 23:10:32.833425   28839 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.key ...
	I0805 23:10:32.833435   28839 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.key: {Name:mk06e9887e5410cb0aa672cd986ef1dfbc411de1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 23:10:32.833498   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0805 23:10:32.833515   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0805 23:10:32.833526   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0805 23:10:32.833538   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0805 23:10:32.833551   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0805 23:10:32.833563   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0805 23:10:32.833575   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0805 23:10:32.833587   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0805 23:10:32.833633   28839 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792.pem (1338 bytes)
	W0805 23:10:32.833665   28839 certs.go:480] ignoring /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792_empty.pem, impossibly tiny 0 bytes
	I0805 23:10:32.833674   28839 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 23:10:32.833696   28839 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem (1082 bytes)
	I0805 23:10:32.833719   28839 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem (1123 bytes)
	I0805 23:10:32.833739   28839 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem (1679 bytes)
	I0805 23:10:32.833777   28839 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem (1708 bytes)
	I0805 23:10:32.833803   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem -> /usr/share/ca-certificates/167922.pem
	I0805 23:10:32.833818   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0805 23:10:32.833831   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792.pem -> /usr/share/ca-certificates/16792.pem
	I0805 23:10:32.834399   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 23:10:32.879930   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0805 23:10:32.913975   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 23:10:32.944571   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 23:10:32.968983   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0805 23:10:32.993481   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 23:10:33.017721   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 23:10:33.042186   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0805 23:10:33.066930   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem --> /usr/share/ca-certificates/167922.pem (1708 bytes)
	I0805 23:10:33.090808   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 23:10:33.115733   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792.pem --> /usr/share/ca-certificates/16792.pem (1338 bytes)
	I0805 23:10:33.139841   28839 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 23:10:33.156696   28839 ssh_runner.go:195] Run: openssl version
	I0805 23:10:33.162741   28839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167922.pem && ln -fs /usr/share/ca-certificates/167922.pem /etc/ssl/certs/167922.pem"
	I0805 23:10:33.174420   28839 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167922.pem
	I0805 23:10:33.179263   28839 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 23:03 /usr/share/ca-certificates/167922.pem
	I0805 23:10:33.179326   28839 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167922.pem
	I0805 23:10:33.185184   28839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167922.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 23:10:33.196451   28839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 23:10:33.207630   28839 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 23:10:33.212291   28839 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0805 23:10:33.212352   28839 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 23:10:33.218331   28839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 23:10:33.230037   28839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16792.pem && ln -fs /usr/share/ca-certificates/16792.pem /etc/ssl/certs/16792.pem"
	I0805 23:10:33.241992   28839 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16792.pem
	I0805 23:10:33.246697   28839 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 23:03 /usr/share/ca-certificates/16792.pem
	I0805 23:10:33.246760   28839 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16792.pem
	I0805 23:10:33.252598   28839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16792.pem /etc/ssl/certs/51391683.0"
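
The openssl/ln sequence above wires up the guest's OpenSSL trust store: each CA file under /usr/share/ca-certificates is hashed with `openssl x509 -hash -noout` and symlinked into /etc/ssl/certs as <subject-hash>.0 (b5213941.0 for minikubeCA.pem, 3ec20f2e.0 and 51391683.0 for the test certs). A minimal Go sketch of that per-certificate step; the paths are placeholders, and the real run happens over SSH with sudo rather than locally.

	// cahash_sketch.go - create an OpenSSL subject-hash symlink for one CA certificate.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func linkByHash(certPath, linkDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(linkDir, hash+".0")
		_ = os.Remove(link) // mirror `ln -fs`: replace any existing link
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Println("skipped:", err)
		}
	}
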
	I0805 23:10:33.264770   28839 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 23:10:33.269421   28839 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0805 23:10:33.269493   28839 kubeadm.go:392] StartCluster: {Name:ha-044175 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-044175 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 23:10:33.269579   28839 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 23:10:33.269637   28839 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 23:10:33.308173   28839 cri.go:89] found id: ""
	I0805 23:10:33.308241   28839 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 23:10:33.319085   28839 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 23:10:33.329568   28839 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 23:10:33.339739   28839 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 23:10:33.339771   28839 kubeadm.go:157] found existing configuration files:
	
	I0805 23:10:33.339822   28839 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 23:10:33.349695   28839 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 23:10:33.349753   28839 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 23:10:33.360133   28839 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 23:10:33.369662   28839 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 23:10:33.369724   28839 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 23:10:33.379529   28839 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 23:10:33.389342   28839 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 23:10:33.389405   28839 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 23:10:33.399427   28839 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 23:10:33.408909   28839 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 23:10:33.408968   28839 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 23:10:33.418772   28839 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 23:10:33.523024   28839 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0805 23:10:33.523150   28839 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 23:10:33.679643   28839 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 23:10:33.679812   28839 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 23:10:33.679952   28839 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 23:10:33.897514   28839 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 23:10:34.014253   28839 out.go:204]   - Generating certificates and keys ...
	I0805 23:10:34.014389   28839 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 23:10:34.014481   28839 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 23:10:34.044964   28839 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0805 23:10:34.226759   28839 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0805 23:10:34.392949   28839 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0805 23:10:34.864847   28839 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0805 23:10:35.000955   28839 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0805 23:10:35.001097   28839 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-044175 localhost] and IPs [192.168.39.57 127.0.0.1 ::1]
	I0805 23:10:35.063745   28839 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0805 23:10:35.063887   28839 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-044175 localhost] and IPs [192.168.39.57 127.0.0.1 ::1]
	I0805 23:10:35.135024   28839 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0805 23:10:35.284912   28839 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0805 23:10:35.612132   28839 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0805 23:10:35.612236   28839 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 23:10:35.793593   28839 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 23:10:36.142430   28839 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0805 23:10:36.298564   28839 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 23:10:36.518325   28839 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 23:10:36.593375   28839 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 23:10:36.593851   28839 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 23:10:36.598532   28839 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 23:10:36.600305   28839 out.go:204]   - Booting up control plane ...
	I0805 23:10:36.600408   28839 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 23:10:36.600475   28839 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 23:10:36.600569   28839 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 23:10:36.617451   28839 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 23:10:36.618440   28839 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 23:10:36.618483   28839 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 23:10:36.762821   28839 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0805 23:10:36.762934   28839 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0805 23:10:37.763884   28839 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001788572s
	I0805 23:10:37.764010   28839 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0805 23:10:43.677609   28839 kubeadm.go:310] [api-check] The API server is healthy after 5.91580752s
	I0805 23:10:43.695829   28839 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 23:10:43.708311   28839 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 23:10:44.240678   28839 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0805 23:10:44.240936   28839 kubeadm.go:310] [mark-control-plane] Marking the node ha-044175 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 23:10:44.253385   28839 kubeadm.go:310] [bootstrap-token] Using token: 51mq8e.2hm5gpr21za1prtm
	I0805 23:10:44.254794   28839 out.go:204]   - Configuring RBAC rules ...
	I0805 23:10:44.254893   28839 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 23:10:44.260438   28839 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 23:10:44.280242   28839 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 23:10:44.288406   28839 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 23:10:44.292808   28839 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 23:10:44.296747   28839 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 23:10:44.310474   28839 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 23:10:44.581975   28839 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0805 23:10:45.088834   28839 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0805 23:10:45.088860   28839 kubeadm.go:310] 
	I0805 23:10:45.088918   28839 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0805 23:10:45.088925   28839 kubeadm.go:310] 
	I0805 23:10:45.088997   28839 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0805 23:10:45.089007   28839 kubeadm.go:310] 
	I0805 23:10:45.089043   28839 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0805 23:10:45.089110   28839 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 23:10:45.089174   28839 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 23:10:45.089184   28839 kubeadm.go:310] 
	I0805 23:10:45.089254   28839 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0805 23:10:45.089266   28839 kubeadm.go:310] 
	I0805 23:10:45.089305   28839 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 23:10:45.089346   28839 kubeadm.go:310] 
	I0805 23:10:45.089431   28839 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0805 23:10:45.089547   28839 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 23:10:45.089663   28839 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 23:10:45.089683   28839 kubeadm.go:310] 
	I0805 23:10:45.089865   28839 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0805 23:10:45.089991   28839 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0805 23:10:45.090002   28839 kubeadm.go:310] 
	I0805 23:10:45.090115   28839 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 51mq8e.2hm5gpr21za1prtm \
	I0805 23:10:45.090263   28839 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80c3f4a7caafd825f47d5f536053424d1d775e8da247cc5594b6b717e711fcd3 \
	I0805 23:10:45.090288   28839 kubeadm.go:310] 	--control-plane 
	I0805 23:10:45.090302   28839 kubeadm.go:310] 
	I0805 23:10:45.090421   28839 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0805 23:10:45.090431   28839 kubeadm.go:310] 
	I0805 23:10:45.090530   28839 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 51mq8e.2hm5gpr21za1prtm \
	I0805 23:10:45.090702   28839 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80c3f4a7caafd825f47d5f536053424d1d775e8da247cc5594b6b717e711fcd3 
	I0805 23:10:45.090850   28839 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
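The kubelet warning above is informational; kubeadm prints it when the kubelet systemd unit is not yet enabled at init time. Assuming the guest uses systemd to manage the kubelet (as the warning text implies), the remediation kubeadm itself suggests would simply be run on the node:

    # enable the kubelet unit so it starts on boot (the fix suggested by kubeadm's warning)
    sudo systemctl enable kubelet.service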
	I0805 23:10:45.090872   28839 cni.go:84] Creating CNI manager for ""
	I0805 23:10:45.090880   28839 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0805 23:10:45.092852   28839 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0805 23:10:45.094286   28839 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0805 23:10:45.100225   28839 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0805 23:10:45.100244   28839 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0805 23:10:45.119568   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
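The CNI step above boils down to two commands inside the guest: check that the stock CNI plugins shipped with the ISO are present, then apply the generated kindnet manifest with the cluster's admin kubeconfig. A minimal manual equivalent, using the exact paths from the log:

    # confirm the portmap CNI plugin exists
    stat /opt/cni/bin/portmap
    # apply the CNI manifest that minikube wrote to /var/tmp/minikube/cni.yaml
    sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply \
        --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml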
	I0805 23:10:45.530296   28839 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 23:10:45.530372   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:45.530372   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-044175 minikube.k8s.io/updated_at=2024_08_05T23_10_45_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4 minikube.k8s.io/name=ha-044175 minikube.k8s.io/primary=true
	I0805 23:10:45.671664   28839 ops.go:34] apiserver oom_adj: -16
	I0805 23:10:45.672023   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:46.172899   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:46.672497   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:47.173189   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:47.672329   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:48.172915   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:48.672499   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:49.172269   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:49.672460   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:50.172909   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:50.672458   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:51.172249   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:51.672914   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:52.172398   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:52.672732   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:53.172838   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:53.672974   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:54.172694   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:54.672320   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:55.172792   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:55.672691   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:56.172406   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:56.672920   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:57.172844   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:57.275504   28839 kubeadm.go:1113] duration metric: took 11.745187263s to wait for elevateKubeSystemPrivileges
	I0805 23:10:57.275555   28839 kubeadm.go:394] duration metric: took 24.006065425s to StartCluster
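The block of repeated "kubectl get sa default" calls above (one roughly every 500ms, matching the timestamps) is minikube waiting for the controller manager to create the default service account before it treats the kube-system privilege elevation as complete. A rough shell sketch of the same polling pattern, assuming the in-guest paths shown in the log:

    # retry until the default service account exists
    until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done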
	I0805 23:10:57.275610   28839 settings.go:142] acquiring lock: {Name:mkd43028f76794f43f4727efb0b77b9a49886053 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 23:10:57.275717   28839 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19373-9606/kubeconfig
	I0805 23:10:57.276507   28839 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/kubeconfig: {Name:mk4481c5dfe578449439dae4abf8681e1b7df535 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 23:10:57.276757   28839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0805 23:10:57.276766   28839 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 23:10:57.276822   28839 addons.go:69] Setting storage-provisioner=true in profile "ha-044175"
	I0805 23:10:57.276835   28839 addons.go:69] Setting default-storageclass=true in profile "ha-044175"
	I0805 23:10:57.276854   28839 addons.go:234] Setting addon storage-provisioner=true in "ha-044175"
	I0805 23:10:57.276868   28839 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-044175"
	I0805 23:10:57.276890   28839 host.go:66] Checking if "ha-044175" exists ...
	I0805 23:10:57.276751   28839 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 23:10:57.276946   28839 config.go:182] Loaded profile config "ha-044175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:10:57.276962   28839 start.go:241] waiting for startup goroutines ...
	I0805 23:10:57.277272   28839 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:10:57.277307   28839 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:10:57.277327   28839 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:10:57.277352   28839 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:10:57.292517   28839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35767
	I0805 23:10:57.292553   28839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40049
	I0805 23:10:57.292968   28839 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:10:57.292974   28839 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:10:57.293506   28839 main.go:141] libmachine: Using API Version  1
	I0805 23:10:57.293534   28839 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:10:57.293659   28839 main.go:141] libmachine: Using API Version  1
	I0805 23:10:57.293683   28839 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:10:57.293948   28839 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:10:57.294073   28839 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:10:57.294249   28839 main.go:141] libmachine: (ha-044175) Calling .GetState
	I0805 23:10:57.294490   28839 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:10:57.294520   28839 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:10:57.296457   28839 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19373-9606/kubeconfig
	I0805 23:10:57.296834   28839 kapi.go:59] client config for ha-044175: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/client.crt", KeyFile:"/home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/client.key", CAFile:"/home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02940), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 23:10:57.297473   28839 cert_rotation.go:137] Starting client certificate rotation controller
	I0805 23:10:57.297778   28839 addons.go:234] Setting addon default-storageclass=true in "ha-044175"
	I0805 23:10:57.297825   28839 host.go:66] Checking if "ha-044175" exists ...
	I0805 23:10:57.298216   28839 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:10:57.298248   28839 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:10:57.310071   28839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36843
	I0805 23:10:57.310475   28839 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:10:57.310970   28839 main.go:141] libmachine: Using API Version  1
	I0805 23:10:57.310997   28839 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:10:57.311316   28839 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:10:57.311517   28839 main.go:141] libmachine: (ha-044175) Calling .GetState
	I0805 23:10:57.312863   28839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44167
	I0805 23:10:57.312905   28839 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:10:57.313336   28839 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:10:57.313752   28839 main.go:141] libmachine: Using API Version  1
	I0805 23:10:57.313780   28839 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:10:57.314088   28839 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:10:57.314570   28839 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:10:57.314596   28839 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:10:57.315368   28839 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 23:10:57.316845   28839 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 23:10:57.316864   28839 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 23:10:57.316881   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:10:57.319703   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:57.320093   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:10:57.320115   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:57.320374   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:10:57.320601   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:10:57.320798   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:10:57.320973   28839 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/id_rsa Username:docker}
	I0805 23:10:57.330343   28839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37261
	I0805 23:10:57.330721   28839 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:10:57.331259   28839 main.go:141] libmachine: Using API Version  1
	I0805 23:10:57.331286   28839 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:10:57.331586   28839 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:10:57.331794   28839 main.go:141] libmachine: (ha-044175) Calling .GetState
	I0805 23:10:57.333476   28839 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:10:57.333692   28839 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 23:10:57.333706   28839 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 23:10:57.333720   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:10:57.336787   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:57.337284   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:10:57.337314   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:57.337462   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:10:57.337656   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:10:57.337899   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:10:57.338069   28839 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/id_rsa Username:docker}
	I0805 23:10:57.429490   28839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0805 23:10:57.496579   28839 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 23:10:57.528658   28839 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 23:10:58.062941   28839 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
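The long one-liner at 23:10:57.429490 is how the host.minikube.internal record ends up in CoreDNS: dump the coredns ConfigMap, splice a hosts block in front of the "forward . /etc/resolv.conf" line with sed, and push the result back with kubectl replace. A slightly more readable sketch of the same edit (the original command also inserts a log directive, omitted here; assumes kubectl already points at the cluster):

    kubectl -n kube-system get configmap coredns -o yaml \
      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' \
      | kubectl -n kube-system replace -f -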
	I0805 23:10:58.303807   28839 main.go:141] libmachine: Making call to close driver server
	I0805 23:10:58.303837   28839 main.go:141] libmachine: (ha-044175) Calling .Close
	I0805 23:10:58.303836   28839 main.go:141] libmachine: Making call to close driver server
	I0805 23:10:58.303857   28839 main.go:141] libmachine: (ha-044175) Calling .Close
	I0805 23:10:58.304140   28839 main.go:141] libmachine: Successfully made call to close driver server
	I0805 23:10:58.304154   28839 main.go:141] libmachine: (ha-044175) DBG | Closing plugin on server side
	I0805 23:10:58.304157   28839 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 23:10:58.304169   28839 main.go:141] libmachine: Making call to close driver server
	I0805 23:10:58.304177   28839 main.go:141] libmachine: (ha-044175) Calling .Close
	I0805 23:10:58.304196   28839 main.go:141] libmachine: Successfully made call to close driver server
	I0805 23:10:58.304212   28839 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 23:10:58.304248   28839 main.go:141] libmachine: Making call to close driver server
	I0805 23:10:58.304286   28839 main.go:141] libmachine: (ha-044175) Calling .Close
	I0805 23:10:58.304214   28839 main.go:141] libmachine: (ha-044175) DBG | Closing plugin on server side
	I0805 23:10:58.304417   28839 main.go:141] libmachine: Successfully made call to close driver server
	I0805 23:10:58.304430   28839 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 23:10:58.304494   28839 main.go:141] libmachine: (ha-044175) DBG | Closing plugin on server side
	I0805 23:10:58.304505   28839 main.go:141] libmachine: Successfully made call to close driver server
	I0805 23:10:58.304516   28839 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 23:10:58.304683   28839 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0805 23:10:58.304690   28839 round_trippers.go:469] Request Headers:
	I0805 23:10:58.304697   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:10:58.304701   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:10:58.317901   28839 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0805 23:10:58.319494   28839 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0805 23:10:58.319513   28839 round_trippers.go:469] Request Headers:
	I0805 23:10:58.319524   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:10:58.319532   28839 round_trippers.go:473]     Content-Type: application/json
	I0805 23:10:58.319537   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:10:58.321833   28839 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 23:10:58.321997   28839 main.go:141] libmachine: Making call to close driver server
	I0805 23:10:58.322014   28839 main.go:141] libmachine: (ha-044175) Calling .Close
	I0805 23:10:58.322347   28839 main.go:141] libmachine: (ha-044175) DBG | Closing plugin on server side
	I0805 23:10:58.322370   28839 main.go:141] libmachine: Successfully made call to close driver server
	I0805 23:10:58.322379   28839 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 23:10:58.324236   28839 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0805 23:10:58.325438   28839 addons.go:510] duration metric: took 1.048665605s for enable addons: enabled=[storage-provisioner default-storageclass]
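Only the two default addons (storage-provisioner, default-storageclass) were requested for this profile, as the toEnable map above shows. The same addons can also be toggled after startup through the minikube CLI; a hedged example against this profile:

    # enable or inspect addons on the ha-044175 profile
    minikube -p ha-044175 addons enable storage-provisioner
    minikube -p ha-044175 addons list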
	I0805 23:10:58.325495   28839 start.go:246] waiting for cluster config update ...
	I0805 23:10:58.325514   28839 start.go:255] writing updated cluster config ...
	I0805 23:10:58.327096   28839 out.go:177] 
	I0805 23:10:58.328594   28839 config.go:182] Loaded profile config "ha-044175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:10:58.328685   28839 profile.go:143] Saving config to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/config.json ...
	I0805 23:10:58.330675   28839 out.go:177] * Starting "ha-044175-m02" control-plane node in "ha-044175" cluster
	I0805 23:10:58.332169   28839 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 23:10:58.332199   28839 cache.go:56] Caching tarball of preloaded images
	I0805 23:10:58.332318   28839 preload.go:172] Found /home/jenkins/minikube-integration/19373-9606/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0805 23:10:58.332335   28839 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0805 23:10:58.332425   28839 profile.go:143] Saving config to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/config.json ...
	I0805 23:10:58.332683   28839 start.go:360] acquireMachinesLock for ha-044175-m02: {Name:mkd2ba511c39504598222edbf83078b718329186 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 23:10:58.332751   28839 start.go:364] duration metric: took 38.793µs to acquireMachinesLock for "ha-044175-m02"
	I0805 23:10:58.332778   28839 start.go:93] Provisioning new machine with config: &{Name:ha-044175 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-044175 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 23:10:58.332926   28839 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0805 23:10:58.334475   28839 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 23:10:58.334588   28839 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:10:58.334623   28839 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:10:58.349611   28839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38127
	I0805 23:10:58.350167   28839 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:10:58.350729   28839 main.go:141] libmachine: Using API Version  1
	I0805 23:10:58.350750   28839 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:10:58.351125   28839 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:10:58.351336   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetMachineName
	I0805 23:10:58.351515   28839 main.go:141] libmachine: (ha-044175-m02) Calling .DriverName
	I0805 23:10:58.351685   28839 start.go:159] libmachine.API.Create for "ha-044175" (driver="kvm2")
	I0805 23:10:58.351712   28839 client.go:168] LocalClient.Create starting
	I0805 23:10:58.351789   28839 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem
	I0805 23:10:58.351877   28839 main.go:141] libmachine: Decoding PEM data...
	I0805 23:10:58.351913   28839 main.go:141] libmachine: Parsing certificate...
	I0805 23:10:58.351991   28839 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem
	I0805 23:10:58.352020   28839 main.go:141] libmachine: Decoding PEM data...
	I0805 23:10:58.352036   28839 main.go:141] libmachine: Parsing certificate...
	I0805 23:10:58.352061   28839 main.go:141] libmachine: Running pre-create checks...
	I0805 23:10:58.352072   28839 main.go:141] libmachine: (ha-044175-m02) Calling .PreCreateCheck
	I0805 23:10:58.352302   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetConfigRaw
	I0805 23:10:58.352725   28839 main.go:141] libmachine: Creating machine...
	I0805 23:10:58.352741   28839 main.go:141] libmachine: (ha-044175-m02) Calling .Create
	I0805 23:10:58.352951   28839 main.go:141] libmachine: (ha-044175-m02) Creating KVM machine...
	I0805 23:10:58.354325   28839 main.go:141] libmachine: (ha-044175-m02) DBG | found existing default KVM network
	I0805 23:10:58.354597   28839 main.go:141] libmachine: (ha-044175-m02) DBG | found existing private KVM network mk-ha-044175
	I0805 23:10:58.354812   28839 main.go:141] libmachine: (ha-044175-m02) Setting up store path in /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m02 ...
	I0805 23:10:58.354866   28839 main.go:141] libmachine: (ha-044175-m02) Building disk image from file:///home/jenkins/minikube-integration/19373-9606/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0805 23:10:58.354889   28839 main.go:141] libmachine: (ha-044175-m02) DBG | I0805 23:10:58.354789   29230 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19373-9606/.minikube
	I0805 23:10:58.355017   28839 main.go:141] libmachine: (ha-044175-m02) Downloading /home/jenkins/minikube-integration/19373-9606/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19373-9606/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0805 23:10:58.586150   28839 main.go:141] libmachine: (ha-044175-m02) DBG | I0805 23:10:58.585975   29230 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m02/id_rsa...
	I0805 23:10:58.799311   28839 main.go:141] libmachine: (ha-044175-m02) DBG | I0805 23:10:58.799193   29230 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m02/ha-044175-m02.rawdisk...
	I0805 23:10:58.799343   28839 main.go:141] libmachine: (ha-044175-m02) DBG | Writing magic tar header
	I0805 23:10:58.799362   28839 main.go:141] libmachine: (ha-044175-m02) DBG | Writing SSH key tar header
	I0805 23:10:58.799425   28839 main.go:141] libmachine: (ha-044175-m02) DBG | I0805 23:10:58.799355   29230 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m02 ...
	I0805 23:10:58.799483   28839 main.go:141] libmachine: (ha-044175-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m02
	I0805 23:10:58.799505   28839 main.go:141] libmachine: (ha-044175-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19373-9606/.minikube/machines
	I0805 23:10:58.799519   28839 main.go:141] libmachine: (ha-044175-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19373-9606/.minikube
	I0805 23:10:58.799535   28839 main.go:141] libmachine: (ha-044175-m02) Setting executable bit set on /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m02 (perms=drwx------)
	I0805 23:10:58.799549   28839 main.go:141] libmachine: (ha-044175-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19373-9606
	I0805 23:10:58.799567   28839 main.go:141] libmachine: (ha-044175-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0805 23:10:58.799578   28839 main.go:141] libmachine: (ha-044175-m02) DBG | Checking permissions on dir: /home/jenkins
	I0805 23:10:58.799591   28839 main.go:141] libmachine: (ha-044175-m02) DBG | Checking permissions on dir: /home
	I0805 23:10:58.799605   28839 main.go:141] libmachine: (ha-044175-m02) DBG | Skipping /home - not owner
	I0805 23:10:58.799622   28839 main.go:141] libmachine: (ha-044175-m02) Setting executable bit set on /home/jenkins/minikube-integration/19373-9606/.minikube/machines (perms=drwxr-xr-x)
	I0805 23:10:58.799640   28839 main.go:141] libmachine: (ha-044175-m02) Setting executable bit set on /home/jenkins/minikube-integration/19373-9606/.minikube (perms=drwxr-xr-x)
	I0805 23:10:58.799654   28839 main.go:141] libmachine: (ha-044175-m02) Setting executable bit set on /home/jenkins/minikube-integration/19373-9606 (perms=drwxrwxr-x)
	I0805 23:10:58.799671   28839 main.go:141] libmachine: (ha-044175-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0805 23:10:58.799683   28839 main.go:141] libmachine: (ha-044175-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0805 23:10:58.799694   28839 main.go:141] libmachine: (ha-044175-m02) Creating domain...
	I0805 23:10:58.800749   28839 main.go:141] libmachine: (ha-044175-m02) define libvirt domain using xml: 
	I0805 23:10:58.800770   28839 main.go:141] libmachine: (ha-044175-m02) <domain type='kvm'>
	I0805 23:10:58.800803   28839 main.go:141] libmachine: (ha-044175-m02)   <name>ha-044175-m02</name>
	I0805 23:10:58.800826   28839 main.go:141] libmachine: (ha-044175-m02)   <memory unit='MiB'>2200</memory>
	I0805 23:10:58.800839   28839 main.go:141] libmachine: (ha-044175-m02)   <vcpu>2</vcpu>
	I0805 23:10:58.800848   28839 main.go:141] libmachine: (ha-044175-m02)   <features>
	I0805 23:10:58.800859   28839 main.go:141] libmachine: (ha-044175-m02)     <acpi/>
	I0805 23:10:58.800880   28839 main.go:141] libmachine: (ha-044175-m02)     <apic/>
	I0805 23:10:58.800891   28839 main.go:141] libmachine: (ha-044175-m02)     <pae/>
	I0805 23:10:58.800898   28839 main.go:141] libmachine: (ha-044175-m02)     
	I0805 23:10:58.800909   28839 main.go:141] libmachine: (ha-044175-m02)   </features>
	I0805 23:10:58.800921   28839 main.go:141] libmachine: (ha-044175-m02)   <cpu mode='host-passthrough'>
	I0805 23:10:58.800932   28839 main.go:141] libmachine: (ha-044175-m02)   
	I0805 23:10:58.800942   28839 main.go:141] libmachine: (ha-044175-m02)   </cpu>
	I0805 23:10:58.800953   28839 main.go:141] libmachine: (ha-044175-m02)   <os>
	I0805 23:10:58.800963   28839 main.go:141] libmachine: (ha-044175-m02)     <type>hvm</type>
	I0805 23:10:58.800974   28839 main.go:141] libmachine: (ha-044175-m02)     <boot dev='cdrom'/>
	I0805 23:10:58.800984   28839 main.go:141] libmachine: (ha-044175-m02)     <boot dev='hd'/>
	I0805 23:10:58.800993   28839 main.go:141] libmachine: (ha-044175-m02)     <bootmenu enable='no'/>
	I0805 23:10:58.801002   28839 main.go:141] libmachine: (ha-044175-m02)   </os>
	I0805 23:10:58.801010   28839 main.go:141] libmachine: (ha-044175-m02)   <devices>
	I0805 23:10:58.801020   28839 main.go:141] libmachine: (ha-044175-m02)     <disk type='file' device='cdrom'>
	I0805 23:10:58.801036   28839 main.go:141] libmachine: (ha-044175-m02)       <source file='/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m02/boot2docker.iso'/>
	I0805 23:10:58.801046   28839 main.go:141] libmachine: (ha-044175-m02)       <target dev='hdc' bus='scsi'/>
	I0805 23:10:58.801055   28839 main.go:141] libmachine: (ha-044175-m02)       <readonly/>
	I0805 23:10:58.801065   28839 main.go:141] libmachine: (ha-044175-m02)     </disk>
	I0805 23:10:58.801074   28839 main.go:141] libmachine: (ha-044175-m02)     <disk type='file' device='disk'>
	I0805 23:10:58.801086   28839 main.go:141] libmachine: (ha-044175-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0805 23:10:58.801101   28839 main.go:141] libmachine: (ha-044175-m02)       <source file='/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m02/ha-044175-m02.rawdisk'/>
	I0805 23:10:58.801111   28839 main.go:141] libmachine: (ha-044175-m02)       <target dev='hda' bus='virtio'/>
	I0805 23:10:58.801122   28839 main.go:141] libmachine: (ha-044175-m02)     </disk>
	I0805 23:10:58.801130   28839 main.go:141] libmachine: (ha-044175-m02)     <interface type='network'>
	I0805 23:10:58.801145   28839 main.go:141] libmachine: (ha-044175-m02)       <source network='mk-ha-044175'/>
	I0805 23:10:58.801155   28839 main.go:141] libmachine: (ha-044175-m02)       <model type='virtio'/>
	I0805 23:10:58.801162   28839 main.go:141] libmachine: (ha-044175-m02)     </interface>
	I0805 23:10:58.801175   28839 main.go:141] libmachine: (ha-044175-m02)     <interface type='network'>
	I0805 23:10:58.801183   28839 main.go:141] libmachine: (ha-044175-m02)       <source network='default'/>
	I0805 23:10:58.801193   28839 main.go:141] libmachine: (ha-044175-m02)       <model type='virtio'/>
	I0805 23:10:58.801205   28839 main.go:141] libmachine: (ha-044175-m02)     </interface>
	I0805 23:10:58.801214   28839 main.go:141] libmachine: (ha-044175-m02)     <serial type='pty'>
	I0805 23:10:58.801223   28839 main.go:141] libmachine: (ha-044175-m02)       <target port='0'/>
	I0805 23:10:58.801231   28839 main.go:141] libmachine: (ha-044175-m02)     </serial>
	I0805 23:10:58.801238   28839 main.go:141] libmachine: (ha-044175-m02)     <console type='pty'>
	I0805 23:10:58.801248   28839 main.go:141] libmachine: (ha-044175-m02)       <target type='serial' port='0'/>
	I0805 23:10:58.801256   28839 main.go:141] libmachine: (ha-044175-m02)     </console>
	I0805 23:10:58.801268   28839 main.go:141] libmachine: (ha-044175-m02)     <rng model='virtio'>
	I0805 23:10:58.801277   28839 main.go:141] libmachine: (ha-044175-m02)       <backend model='random'>/dev/random</backend>
	I0805 23:10:58.801285   28839 main.go:141] libmachine: (ha-044175-m02)     </rng>
	I0805 23:10:58.801293   28839 main.go:141] libmachine: (ha-044175-m02)     
	I0805 23:10:58.801301   28839 main.go:141] libmachine: (ha-044175-m02)     
	I0805 23:10:58.801308   28839 main.go:141] libmachine: (ha-044175-m02)   </devices>
	I0805 23:10:58.801318   28839 main.go:141] libmachine: (ha-044175-m02) </domain>
	I0805 23:10:58.801327   28839 main.go:141] libmachine: (ha-044175-m02) 
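The XML printed above is the complete libvirt definition for the m02 guest: 2 vCPUs, 2200 MiB of RAM, the boot2docker ISO as a CD-ROM, the raw disk image, and two virtio NICs attached to mk-ha-044175 and the default network. minikube defines the domain through the libvirt API, but an equivalent manual workflow with virsh (illustrative only; the file name is hypothetical) would be:

    # define and start a domain from a file containing the XML above
    virsh define ha-044175-m02.xml
    virsh start ha-044175-m02
    # inspect the guest's interface addresses once DHCP has assigned one
    virsh domifaddr ha-044175-m02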
	I0805 23:10:58.807890   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:99:c3:ce in network default
	I0805 23:10:58.808449   28839 main.go:141] libmachine: (ha-044175-m02) Ensuring networks are active...
	I0805 23:10:58.808501   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:10:58.809157   28839 main.go:141] libmachine: (ha-044175-m02) Ensuring network default is active
	I0805 23:10:58.809406   28839 main.go:141] libmachine: (ha-044175-m02) Ensuring network mk-ha-044175 is active
	I0805 23:10:58.809712   28839 main.go:141] libmachine: (ha-044175-m02) Getting domain xml...
	I0805 23:10:58.810292   28839 main.go:141] libmachine: (ha-044175-m02) Creating domain...
	I0805 23:11:00.027893   28839 main.go:141] libmachine: (ha-044175-m02) Waiting to get IP...
	I0805 23:11:00.028630   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:00.029203   28839 main.go:141] libmachine: (ha-044175-m02) DBG | unable to find current IP address of domain ha-044175-m02 in network mk-ha-044175
	I0805 23:11:00.029235   28839 main.go:141] libmachine: (ha-044175-m02) DBG | I0805 23:11:00.029161   29230 retry.go:31] will retry after 248.488515ms: waiting for machine to come up
	I0805 23:11:00.279766   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:00.280307   28839 main.go:141] libmachine: (ha-044175-m02) DBG | unable to find current IP address of domain ha-044175-m02 in network mk-ha-044175
	I0805 23:11:00.280335   28839 main.go:141] libmachine: (ha-044175-m02) DBG | I0805 23:11:00.280249   29230 retry.go:31] will retry after 355.99083ms: waiting for machine to come up
	I0805 23:11:00.638118   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:00.638625   28839 main.go:141] libmachine: (ha-044175-m02) DBG | unable to find current IP address of domain ha-044175-m02 in network mk-ha-044175
	I0805 23:11:00.638652   28839 main.go:141] libmachine: (ha-044175-m02) DBG | I0805 23:11:00.638592   29230 retry.go:31] will retry after 297.161612ms: waiting for machine to come up
	I0805 23:11:00.937132   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:00.937612   28839 main.go:141] libmachine: (ha-044175-m02) DBG | unable to find current IP address of domain ha-044175-m02 in network mk-ha-044175
	I0805 23:11:00.937643   28839 main.go:141] libmachine: (ha-044175-m02) DBG | I0805 23:11:00.937553   29230 retry.go:31] will retry after 401.402039ms: waiting for machine to come up
	I0805 23:11:01.340305   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:01.340858   28839 main.go:141] libmachine: (ha-044175-m02) DBG | unable to find current IP address of domain ha-044175-m02 in network mk-ha-044175
	I0805 23:11:01.340884   28839 main.go:141] libmachine: (ha-044175-m02) DBG | I0805 23:11:01.340832   29230 retry.go:31] will retry after 485.040791ms: waiting for machine to come up
	I0805 23:11:01.827501   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:01.827967   28839 main.go:141] libmachine: (ha-044175-m02) DBG | unable to find current IP address of domain ha-044175-m02 in network mk-ha-044175
	I0805 23:11:01.827991   28839 main.go:141] libmachine: (ha-044175-m02) DBG | I0805 23:11:01.827903   29230 retry.go:31] will retry after 934.253059ms: waiting for machine to come up
	I0805 23:11:02.764170   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:02.764627   28839 main.go:141] libmachine: (ha-044175-m02) DBG | unable to find current IP address of domain ha-044175-m02 in network mk-ha-044175
	I0805 23:11:02.764689   28839 main.go:141] libmachine: (ha-044175-m02) DBG | I0805 23:11:02.764598   29230 retry.go:31] will retry after 896.946537ms: waiting for machine to come up
	I0805 23:11:03.663096   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:03.663641   28839 main.go:141] libmachine: (ha-044175-m02) DBG | unable to find current IP address of domain ha-044175-m02 in network mk-ha-044175
	I0805 23:11:03.663673   28839 main.go:141] libmachine: (ha-044175-m02) DBG | I0805 23:11:03.663581   29230 retry.go:31] will retry after 923.400753ms: waiting for machine to come up
	I0805 23:11:04.588190   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:04.588678   28839 main.go:141] libmachine: (ha-044175-m02) DBG | unable to find current IP address of domain ha-044175-m02 in network mk-ha-044175
	I0805 23:11:04.588713   28839 main.go:141] libmachine: (ha-044175-m02) DBG | I0805 23:11:04.588629   29230 retry.go:31] will retry after 1.43340992s: waiting for machine to come up
	I0805 23:11:06.024240   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:06.024737   28839 main.go:141] libmachine: (ha-044175-m02) DBG | unable to find current IP address of domain ha-044175-m02 in network mk-ha-044175
	I0805 23:11:06.024773   28839 main.go:141] libmachine: (ha-044175-m02) DBG | I0805 23:11:06.024699   29230 retry.go:31] will retry after 1.530394502s: waiting for machine to come up
	I0805 23:11:07.556260   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:07.556768   28839 main.go:141] libmachine: (ha-044175-m02) DBG | unable to find current IP address of domain ha-044175-m02 in network mk-ha-044175
	I0805 23:11:07.556795   28839 main.go:141] libmachine: (ha-044175-m02) DBG | I0805 23:11:07.556712   29230 retry.go:31] will retry after 2.88336861s: waiting for machine to come up
	I0805 23:11:10.441210   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:10.441647   28839 main.go:141] libmachine: (ha-044175-m02) DBG | unable to find current IP address of domain ha-044175-m02 in network mk-ha-044175
	I0805 23:11:10.441678   28839 main.go:141] libmachine: (ha-044175-m02) DBG | I0805 23:11:10.441597   29230 retry.go:31] will retry after 3.081446368s: waiting for machine to come up
	I0805 23:11:13.525137   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:13.525456   28839 main.go:141] libmachine: (ha-044175-m02) DBG | unable to find current IP address of domain ha-044175-m02 in network mk-ha-044175
	I0805 23:11:13.525498   28839 main.go:141] libmachine: (ha-044175-m02) DBG | I0805 23:11:13.525431   29230 retry.go:31] will retry after 4.471112661s: waiting for machine to come up
	I0805 23:11:18.000407   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:18.000819   28839 main.go:141] libmachine: (ha-044175-m02) DBG | unable to find current IP address of domain ha-044175-m02 in network mk-ha-044175
	I0805 23:11:18.000837   28839 main.go:141] libmachine: (ha-044175-m02) DBG | I0805 23:11:18.000779   29230 retry.go:31] will retry after 5.282329341s: waiting for machine to come up
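The retry loop above is the kvm2 driver waiting for the new guest to obtain a DHCP lease on the mk-ha-044175 network, with backoff growing from roughly 250ms to several seconds. Outside minikube, the same lease information can be read directly from libvirt on the host, for example:

    # list current DHCP leases for the cluster network (the m02 MAC is 52:54:00:84:bb:47)
    virsh net-dhcp-leases mk-ha-044175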
	I0805 23:11:23.288261   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:23.288807   28839 main.go:141] libmachine: (ha-044175-m02) Found IP for machine: 192.168.39.112
	I0805 23:11:23.288835   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has current primary IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:23.288844   28839 main.go:141] libmachine: (ha-044175-m02) Reserving static IP address...
	I0805 23:11:23.289387   28839 main.go:141] libmachine: (ha-044175-m02) DBG | unable to find host DHCP lease matching {name: "ha-044175-m02", mac: "52:54:00:84:bb:47", ip: "192.168.39.112"} in network mk-ha-044175
	I0805 23:11:23.364345   28839 main.go:141] libmachine: (ha-044175-m02) DBG | Getting to WaitForSSH function...
	I0805 23:11:23.364386   28839 main.go:141] libmachine: (ha-044175-m02) Reserved static IP address: 192.168.39.112
	I0805 23:11:23.364401   28839 main.go:141] libmachine: (ha-044175-m02) Waiting for SSH to be available...
	I0805 23:11:23.366926   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:23.367273   28839 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:minikube Clientid:01:52:54:00:84:bb:47}
	I0805 23:11:23.367305   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:23.367491   28839 main.go:141] libmachine: (ha-044175-m02) DBG | Using SSH client type: external
	I0805 23:11:23.367512   28839 main.go:141] libmachine: (ha-044175-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m02/id_rsa (-rw-------)
	I0805 23:11:23.367541   28839 main.go:141] libmachine: (ha-044175-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.112 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 23:11:23.367559   28839 main.go:141] libmachine: (ha-044175-m02) DBG | About to run SSH command:
	I0805 23:11:23.367579   28839 main.go:141] libmachine: (ha-044175-m02) DBG | exit 0
	I0805 23:11:23.496309   28839 main.go:141] libmachine: (ha-044175-m02) DBG | SSH cmd err, output: <nil>: 
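The WaitForSSH step shells out to the system ssh client with the options shown at 23:11:23.367541 and simply runs exit 0 until it succeeds. A manual reproduction from the host, using the key path, user, and address from the log, would look like:

    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -i /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m02/id_rsa \
        docker@192.168.39.112 'exit 0' && echo 'ssh is up'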
	I0805 23:11:23.496557   28839 main.go:141] libmachine: (ha-044175-m02) KVM machine creation complete!
	I0805 23:11:23.496917   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetConfigRaw
	I0805 23:11:23.497407   28839 main.go:141] libmachine: (ha-044175-m02) Calling .DriverName
	I0805 23:11:23.497585   28839 main.go:141] libmachine: (ha-044175-m02) Calling .DriverName
	I0805 23:11:23.497727   28839 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0805 23:11:23.497741   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetState
	I0805 23:11:23.499071   28839 main.go:141] libmachine: Detecting operating system of created instance...
	I0805 23:11:23.499103   28839 main.go:141] libmachine: Waiting for SSH to be available...
	I0805 23:11:23.499111   28839 main.go:141] libmachine: Getting to WaitForSSH function...
	I0805 23:11:23.499122   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHHostname
	I0805 23:11:23.501648   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:23.502021   28839 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:11:23.502050   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:23.502161   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHPort
	I0805 23:11:23.502340   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHKeyPath
	I0805 23:11:23.502515   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHKeyPath
	I0805 23:11:23.502637   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHUsername
	I0805 23:11:23.502830   28839 main.go:141] libmachine: Using SSH client type: native
	I0805 23:11:23.503019   28839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.112 22 <nil> <nil>}
	I0805 23:11:23.503032   28839 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0805 23:11:23.610545   28839 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 23:11:23.610566   28839 main.go:141] libmachine: Detecting the provisioner...
	I0805 23:11:23.610576   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHHostname
	I0805 23:11:23.613574   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:23.613964   28839 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:11:23.613993   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:23.614108   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHPort
	I0805 23:11:23.614273   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHKeyPath
	I0805 23:11:23.614473   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHKeyPath
	I0805 23:11:23.614639   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHUsername
	I0805 23:11:23.614851   28839 main.go:141] libmachine: Using SSH client type: native
	I0805 23:11:23.615022   28839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.112 22 <nil> <nil>}
	I0805 23:11:23.615035   28839 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0805 23:11:23.720105   28839 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0805 23:11:23.720191   28839 main.go:141] libmachine: found compatible host: buildroot
	I0805 23:11:23.720204   28839 main.go:141] libmachine: Provisioning with buildroot...
	I0805 23:11:23.720211   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetMachineName
	I0805 23:11:23.720443   28839 buildroot.go:166] provisioning hostname "ha-044175-m02"
	I0805 23:11:23.720464   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetMachineName
	I0805 23:11:23.720655   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHHostname
	I0805 23:11:23.723447   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:23.723826   28839 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:11:23.723846   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:23.724037   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHPort
	I0805 23:11:23.724209   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHKeyPath
	I0805 23:11:23.724357   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHKeyPath
	I0805 23:11:23.724504   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHUsername
	I0805 23:11:23.724673   28839 main.go:141] libmachine: Using SSH client type: native
	I0805 23:11:23.724850   28839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.112 22 <nil> <nil>}
	I0805 23:11:23.724862   28839 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-044175-m02 && echo "ha-044175-m02" | sudo tee /etc/hostname
	I0805 23:11:23.848323   28839 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-044175-m02
	
	I0805 23:11:23.848360   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHHostname
	I0805 23:11:23.851291   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:23.851733   28839 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:11:23.851771   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:23.851921   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHPort
	I0805 23:11:23.852127   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHKeyPath
	I0805 23:11:23.852398   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHKeyPath
	I0805 23:11:23.852545   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHUsername
	I0805 23:11:23.852769   28839 main.go:141] libmachine: Using SSH client type: native
	I0805 23:11:23.852973   28839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.112 22 <nil> <nil>}
	I0805 23:11:23.852990   28839 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-044175-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-044175-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-044175-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 23:11:23.968097   28839 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 23:11:23.968135   28839 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19373-9606/.minikube CaCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19373-9606/.minikube}
	I0805 23:11:23.968155   28839 buildroot.go:174] setting up certificates
	I0805 23:11:23.968169   28839 provision.go:84] configureAuth start
	I0805 23:11:23.968178   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetMachineName
	I0805 23:11:23.968428   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetIP
	I0805 23:11:23.971503   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:23.971906   28839 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:11:23.971937   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:23.972129   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHHostname
	I0805 23:11:23.974453   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:23.974801   28839 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:11:23.974857   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:23.974940   28839 provision.go:143] copyHostCerts
	I0805 23:11:23.974985   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem
	I0805 23:11:23.975026   28839 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem, removing ...
	I0805 23:11:23.975063   28839 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem
	I0805 23:11:23.975147   28839 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem (1082 bytes)
	I0805 23:11:23.975240   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem
	I0805 23:11:23.975265   28839 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem, removing ...
	I0805 23:11:23.975275   28839 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem
	I0805 23:11:23.975313   28839 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem (1123 bytes)
	I0805 23:11:23.975468   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem
	I0805 23:11:23.975498   28839 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem, removing ...
	I0805 23:11:23.975506   28839 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem
	I0805 23:11:23.975588   28839 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem (1679 bytes)
	I0805 23:11:23.975686   28839 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem org=jenkins.ha-044175-m02 san=[127.0.0.1 192.168.39.112 ha-044175-m02 localhost minikube]
	I0805 23:11:24.361457   28839 provision.go:177] copyRemoteCerts
	I0805 23:11:24.361520   28839 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 23:11:24.361549   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHHostname
	I0805 23:11:24.364017   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:24.364429   28839 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:11:24.364462   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:24.364598   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHPort
	I0805 23:11:24.364831   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHKeyPath
	I0805 23:11:24.364995   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHUsername
	I0805 23:11:24.365133   28839 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m02/id_rsa Username:docker}
	I0805 23:11:24.450484   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 23:11:24.450573   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 23:11:24.474841   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 23:11:24.474907   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 23:11:24.500328   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 23:11:24.500404   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0805 23:11:24.523694   28839 provision.go:87] duration metric: took 555.511879ms to configureAuth
	I0805 23:11:24.523728   28839 buildroot.go:189] setting minikube options for container-runtime
	I0805 23:11:24.523925   28839 config.go:182] Loaded profile config "ha-044175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:11:24.524010   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHHostname
	I0805 23:11:24.526859   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:24.527274   28839 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:11:24.527303   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:24.527546   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHPort
	I0805 23:11:24.527754   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHKeyPath
	I0805 23:11:24.527969   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHKeyPath
	I0805 23:11:24.528126   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHUsername
	I0805 23:11:24.528305   28839 main.go:141] libmachine: Using SSH client type: native
	I0805 23:11:24.528504   28839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.112 22 <nil> <nil>}
	I0805 23:11:24.528526   28839 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 23:11:24.798971   28839 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 23:11:24.798996   28839 main.go:141] libmachine: Checking connection to Docker...
	I0805 23:11:24.799009   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetURL
	I0805 23:11:24.800534   28839 main.go:141] libmachine: (ha-044175-m02) DBG | Using libvirt version 6000000
	I0805 23:11:24.802798   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:24.803181   28839 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:11:24.803206   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:24.803378   28839 main.go:141] libmachine: Docker is up and running!
	I0805 23:11:24.803392   28839 main.go:141] libmachine: Reticulating splines...
	I0805 23:11:24.803399   28839 client.go:171] duration metric: took 26.451679001s to LocalClient.Create
	I0805 23:11:24.803423   28839 start.go:167] duration metric: took 26.451737647s to libmachine.API.Create "ha-044175"
	I0805 23:11:24.803435   28839 start.go:293] postStartSetup for "ha-044175-m02" (driver="kvm2")
	I0805 23:11:24.803449   28839 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 23:11:24.803471   28839 main.go:141] libmachine: (ha-044175-m02) Calling .DriverName
	I0805 23:11:24.803743   28839 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 23:11:24.803766   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHHostname
	I0805 23:11:24.806266   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:24.806678   28839 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:11:24.806706   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:24.806848   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHPort
	I0805 23:11:24.807027   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHKeyPath
	I0805 23:11:24.807171   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHUsername
	I0805 23:11:24.807342   28839 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m02/id_rsa Username:docker}
	I0805 23:11:24.890641   28839 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 23:11:24.894903   28839 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 23:11:24.894935   28839 filesync.go:126] Scanning /home/jenkins/minikube-integration/19373-9606/.minikube/addons for local assets ...
	I0805 23:11:24.895008   28839 filesync.go:126] Scanning /home/jenkins/minikube-integration/19373-9606/.minikube/files for local assets ...
	I0805 23:11:24.895124   28839 filesync.go:149] local asset: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem -> 167922.pem in /etc/ssl/certs
	I0805 23:11:24.895136   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem -> /etc/ssl/certs/167922.pem
	I0805 23:11:24.895242   28839 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 23:11:24.904881   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem --> /etc/ssl/certs/167922.pem (1708 bytes)
	I0805 23:11:24.929078   28839 start.go:296] duration metric: took 125.629214ms for postStartSetup
	I0805 23:11:24.929143   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetConfigRaw
	I0805 23:11:24.929870   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetIP
	I0805 23:11:24.933181   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:24.933617   28839 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:11:24.933650   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:24.933916   28839 profile.go:143] Saving config to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/config.json ...
	I0805 23:11:24.934200   28839 start.go:128] duration metric: took 26.601256424s to createHost
	I0805 23:11:24.934240   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHHostname
	I0805 23:11:24.936916   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:24.937307   28839 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:11:24.937334   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:24.937478   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHPort
	I0805 23:11:24.937663   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHKeyPath
	I0805 23:11:24.937851   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHKeyPath
	I0805 23:11:24.938028   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHUsername
	I0805 23:11:24.938195   28839 main.go:141] libmachine: Using SSH client type: native
	I0805 23:11:24.938370   28839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.112 22 <nil> <nil>}
	I0805 23:11:24.938382   28839 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 23:11:25.047998   28839 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722899485.026896819
	
	I0805 23:11:25.048024   28839 fix.go:216] guest clock: 1722899485.026896819
	I0805 23:11:25.048036   28839 fix.go:229] Guest: 2024-08-05 23:11:25.026896819 +0000 UTC Remote: 2024-08-05 23:11:24.934222067 +0000 UTC m=+84.250210200 (delta=92.674752ms)
	I0805 23:11:25.048082   28839 fix.go:200] guest clock delta is within tolerance: 92.674752ms
	I0805 23:11:25.048092   28839 start.go:83] releasing machines lock for "ha-044175-m02", held for 26.715325803s
	I0805 23:11:25.048117   28839 main.go:141] libmachine: (ha-044175-m02) Calling .DriverName
	I0805 23:11:25.048440   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetIP
	I0805 23:11:25.051622   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:25.052002   28839 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:11:25.052027   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:25.054434   28839 out.go:177] * Found network options:
	I0805 23:11:25.055807   28839 out.go:177]   - NO_PROXY=192.168.39.57
	W0805 23:11:25.057028   28839 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 23:11:25.057057   28839 main.go:141] libmachine: (ha-044175-m02) Calling .DriverName
	I0805 23:11:25.057686   28839 main.go:141] libmachine: (ha-044175-m02) Calling .DriverName
	I0805 23:11:25.057893   28839 main.go:141] libmachine: (ha-044175-m02) Calling .DriverName
	I0805 23:11:25.057955   28839 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 23:11:25.058002   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHHostname
	W0805 23:11:25.058072   28839 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 23:11:25.058136   28839 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 23:11:25.058149   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHHostname
	I0805 23:11:25.060690   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:25.060938   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:25.061131   28839 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:11:25.061156   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:25.061313   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHPort
	I0805 23:11:25.061437   28839 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:11:25.061460   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:25.061476   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHKeyPath
	I0805 23:11:25.061626   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHPort
	I0805 23:11:25.061632   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHUsername
	I0805 23:11:25.061803   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHKeyPath
	I0805 23:11:25.061794   28839 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m02/id_rsa Username:docker}
	I0805 23:11:25.061951   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHUsername
	I0805 23:11:25.062141   28839 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m02/id_rsa Username:docker}
	I0805 23:11:25.299725   28839 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 23:11:25.306174   28839 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 23:11:25.306242   28839 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 23:11:25.322685   28839 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 23:11:25.322706   28839 start.go:495] detecting cgroup driver to use...
	I0805 23:11:25.322785   28839 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 23:11:25.339357   28839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 23:11:25.354693   28839 docker.go:217] disabling cri-docker service (if available) ...
	I0805 23:11:25.354757   28839 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 23:11:25.369378   28839 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 23:11:25.384906   28839 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 23:11:25.515594   28839 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 23:11:25.686718   28839 docker.go:233] disabling docker service ...
	I0805 23:11:25.686825   28839 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 23:11:25.702675   28839 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 23:11:25.716283   28839 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 23:11:25.850322   28839 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 23:11:25.974241   28839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 23:11:25.989237   28839 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 23:11:26.008400   28839 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0805 23:11:26.008467   28839 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:11:26.019119   28839 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 23:11:26.019203   28839 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:11:26.030550   28839 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:11:26.042068   28839 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:11:26.052855   28839 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 23:11:26.063581   28839 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:11:26.073993   28839 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:11:26.090786   28839 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:11:26.102178   28839 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 23:11:26.113041   28839 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 23:11:26.113101   28839 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 23:11:26.128071   28839 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 23:11:26.139676   28839 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 23:11:26.267911   28839 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 23:11:26.406809   28839 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 23:11:26.406876   28839 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 23:11:26.411835   28839 start.go:563] Will wait 60s for crictl version
	I0805 23:11:26.411902   28839 ssh_runner.go:195] Run: which crictl
	I0805 23:11:26.416003   28839 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 23:11:26.455739   28839 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 23:11:26.455808   28839 ssh_runner.go:195] Run: crio --version
	I0805 23:11:26.486871   28839 ssh_runner.go:195] Run: crio --version
	I0805 23:11:26.518231   28839 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0805 23:11:26.519697   28839 out.go:177]   - env NO_PROXY=192.168.39.57
	I0805 23:11:26.521151   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetIP
	I0805 23:11:26.524244   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:26.524712   28839 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:11:26.524738   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:26.524958   28839 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0805 23:11:26.529501   28839 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 23:11:26.542765   28839 mustload.go:65] Loading cluster: ha-044175
	I0805 23:11:26.542991   28839 config.go:182] Loaded profile config "ha-044175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:11:26.543340   28839 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:11:26.543377   28839 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:11:26.557439   28839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43339
	I0805 23:11:26.557872   28839 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:11:26.558273   28839 main.go:141] libmachine: Using API Version  1
	I0805 23:11:26.558294   28839 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:11:26.558592   28839 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:11:26.558775   28839 main.go:141] libmachine: (ha-044175) Calling .GetState
	I0805 23:11:26.560260   28839 host.go:66] Checking if "ha-044175" exists ...
	I0805 23:11:26.560572   28839 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:11:26.560616   28839 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:11:26.575601   28839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37281
	I0805 23:11:26.575998   28839 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:11:26.576408   28839 main.go:141] libmachine: Using API Version  1
	I0805 23:11:26.576432   28839 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:11:26.576748   28839 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:11:26.576917   28839 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:11:26.577091   28839 certs.go:68] Setting up /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175 for IP: 192.168.39.112
	I0805 23:11:26.577107   28839 certs.go:194] generating shared ca certs ...
	I0805 23:11:26.577126   28839 certs.go:226] acquiring lock for ca certs: {Name:mkf35a042c1656d191f542eee7fa087aad4d29d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 23:11:26.577263   28839 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key
	I0805 23:11:26.577301   28839 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key
	I0805 23:11:26.577310   28839 certs.go:256] generating profile certs ...
	I0805 23:11:26.577379   28839 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/client.key
	I0805 23:11:26.577402   28839 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key.ad18f62e
	I0805 23:11:26.577418   28839 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt.ad18f62e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.57 192.168.39.112 192.168.39.254]
	I0805 23:11:26.637767   28839 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt.ad18f62e ...
	I0805 23:11:26.637796   28839 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt.ad18f62e: {Name:mkad1ee795bff5c5d74c9f4f3dd96dcf784d053b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 23:11:26.637952   28839 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key.ad18f62e ...
	I0805 23:11:26.637964   28839 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key.ad18f62e: {Name:mk035a446b2e7691a651da6b4b78721fdb2a6d0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 23:11:26.638029   28839 certs.go:381] copying /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt.ad18f62e -> /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt
	I0805 23:11:26.638159   28839 certs.go:385] copying /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key.ad18f62e -> /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key
	I0805 23:11:26.638287   28839 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.key
	I0805 23:11:26.638301   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0805 23:11:26.638314   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0805 23:11:26.638327   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0805 23:11:26.638339   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0805 23:11:26.638352   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0805 23:11:26.638365   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0805 23:11:26.638376   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0805 23:11:26.638388   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0805 23:11:26.638436   28839 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792.pem (1338 bytes)
	W0805 23:11:26.638475   28839 certs.go:480] ignoring /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792_empty.pem, impossibly tiny 0 bytes
	I0805 23:11:26.638483   28839 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 23:11:26.638513   28839 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem (1082 bytes)
	I0805 23:11:26.638543   28839 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem (1123 bytes)
	I0805 23:11:26.638580   28839 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem (1679 bytes)
	I0805 23:11:26.638635   28839 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem (1708 bytes)
	I0805 23:11:26.638673   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem -> /usr/share/ca-certificates/167922.pem
	I0805 23:11:26.638696   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0805 23:11:26.638715   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792.pem -> /usr/share/ca-certificates/16792.pem
	I0805 23:11:26.638759   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:11:26.641680   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:11:26.642048   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:11:26.642082   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:11:26.642240   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:11:26.642460   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:11:26.642609   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:11:26.642730   28839 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/id_rsa Username:docker}
	I0805 23:11:26.711456   28839 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0805 23:11:26.718679   28839 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0805 23:11:26.739835   28839 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0805 23:11:26.744682   28839 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0805 23:11:26.757766   28839 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0805 23:11:26.762276   28839 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0805 23:11:26.773825   28839 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0805 23:11:26.778500   28839 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0805 23:11:26.789799   28839 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0805 23:11:26.794170   28839 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0805 23:11:26.806013   28839 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0805 23:11:26.810492   28839 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0805 23:11:26.821952   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 23:11:26.848184   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0805 23:11:26.872528   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 23:11:26.897402   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 23:11:26.925593   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0805 23:11:26.953348   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0805 23:11:26.977810   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 23:11:27.002673   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0805 23:11:27.027853   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem --> /usr/share/ca-certificates/167922.pem (1708 bytes)
	I0805 23:11:27.052873   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 23:11:27.077190   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792.pem --> /usr/share/ca-certificates/16792.pem (1338 bytes)
	I0805 23:11:27.103532   28839 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0805 23:11:27.120876   28839 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0805 23:11:27.138371   28839 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0805 23:11:27.155459   28839 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0805 23:11:27.172538   28839 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0805 23:11:27.189842   28839 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0805 23:11:27.207488   28839 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0805 23:11:27.224852   28839 ssh_runner.go:195] Run: openssl version
	I0805 23:11:27.230683   28839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167922.pem && ln -fs /usr/share/ca-certificates/167922.pem /etc/ssl/certs/167922.pem"
	I0805 23:11:27.241747   28839 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167922.pem
	I0805 23:11:27.246552   28839 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 23:03 /usr/share/ca-certificates/167922.pem
	I0805 23:11:27.246620   28839 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167922.pem
	I0805 23:11:27.252853   28839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167922.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 23:11:27.264243   28839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 23:11:27.275787   28839 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 23:11:27.280638   28839 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0805 23:11:27.280702   28839 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 23:11:27.286790   28839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 23:11:27.298786   28839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16792.pem && ln -fs /usr/share/ca-certificates/16792.pem /etc/ssl/certs/16792.pem"
	I0805 23:11:27.310297   28839 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16792.pem
	I0805 23:11:27.315450   28839 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 23:03 /usr/share/ca-certificates/16792.pem
	I0805 23:11:27.315514   28839 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16792.pem
	I0805 23:11:27.321473   28839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16792.pem /etc/ssl/certs/51391683.0"
	I0805 23:11:27.332363   28839 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 23:11:27.336935   28839 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0805 23:11:27.336993   28839 kubeadm.go:934] updating node {m02 192.168.39.112 8443 v1.30.3 crio true true} ...
	I0805 23:11:27.337095   28839 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-044175-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.112
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-044175 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 23:11:27.337124   28839 kube-vip.go:115] generating kube-vip config ...
	I0805 23:11:27.337161   28839 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0805 23:11:27.354765   28839 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0805 23:11:27.354834   28839 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0805 23:11:27.354891   28839 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 23:11:27.365652   28839 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0805 23:11:27.365704   28839 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0805 23:11:27.375939   28839 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19373-9606/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0805 23:11:27.375939   28839 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19373-9606/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0805 23:11:27.375940   28839 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0805 23:11:27.376110   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0805 23:11:27.376208   28839 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0805 23:11:27.382613   28839 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0805 23:11:27.382649   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0805 23:11:28.470234   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0805 23:11:28.470313   28839 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0805 23:11:28.475604   28839 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0805 23:11:28.475649   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0805 23:11:28.917910   28839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 23:11:28.932927   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0805 23:11:28.933011   28839 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0805 23:11:28.937444   28839 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0805 23:11:28.937481   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0805 23:11:29.356509   28839 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0805 23:11:29.366545   28839 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0805 23:11:29.383465   28839 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 23:11:29.400422   28839 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0805 23:11:29.417598   28839 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0805 23:11:29.422348   28839 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 23:11:29.435838   28839 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 23:11:29.557695   28839 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 23:11:29.576202   28839 host.go:66] Checking if "ha-044175" exists ...
	I0805 23:11:29.576670   28839 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:11:29.576714   28839 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:11:29.591867   28839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45075
	I0805 23:11:29.592430   28839 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:11:29.592950   28839 main.go:141] libmachine: Using API Version  1
	I0805 23:11:29.592969   28839 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:11:29.593276   28839 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:11:29.593479   28839 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:11:29.593607   28839 start.go:317] joinCluster: &{Name:ha-044175 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-044175 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.112 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 23:11:29.593717   28839 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0805 23:11:29.593739   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:11:29.597339   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:11:29.597799   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:11:29.597825   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:11:29.598007   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:11:29.598215   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:11:29.598389   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:11:29.598524   28839 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/id_rsa Username:docker}
	I0805 23:11:29.760365   28839 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.112 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 23:11:29.760406   28839 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9e2ce2.maogyyg7kfbeyj3n --discovery-token-ca-cert-hash sha256:80c3f4a7caafd825f47d5f536053424d1d775e8da247cc5594b6b717e711fcd3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-044175-m02 --control-plane --apiserver-advertise-address=192.168.39.112 --apiserver-bind-port=8443"
	I0805 23:11:51.894631   28839 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9e2ce2.maogyyg7kfbeyj3n --discovery-token-ca-cert-hash sha256:80c3f4a7caafd825f47d5f536053424d1d775e8da247cc5594b6b717e711fcd3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-044175-m02 --control-plane --apiserver-advertise-address=192.168.39.112 --apiserver-bind-port=8443": (22.134160621s)
	I0805 23:11:51.894696   28839 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0805 23:11:52.474720   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-044175-m02 minikube.k8s.io/updated_at=2024_08_05T23_11_52_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4 minikube.k8s.io/name=ha-044175 minikube.k8s.io/primary=false
	I0805 23:11:52.614587   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-044175-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0805 23:11:52.799520   28839 start.go:319] duration metric: took 23.205908074s to joinCluster
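The join sequence above reduces to the commands the log runs over SSH: print a fresh, non-expiring join command on the existing control plane (kubeadm token create --print-join-command --ttl=0), replay it on m02 with the extra control-plane flags shown, then bring kubelet up, and finally label and untaint the new node. The following is a minimal Go sketch of that flow, assuming it runs with root privileges directly on the respective hosts (in the log the two halves run on different machines via ssh_runner); the helper names and hard-coded values are illustrative, not minikube's own code.

	// join_sketch.go - illustrative only; mirrors the kubeadm commands in the log above.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	// printJoinCommand runs on the existing control plane and returns a
	// non-expiring "kubeadm join ..." command line.
	func printJoinCommand() (string, error) {
		out, err := exec.Command("kubeadm", "token", "create",
			"--print-join-command", "--ttl=0").Output()
		return strings.TrimSpace(string(out)), err
	}

	// joinAsControlPlane runs on the new node and replays the join command with
	// the same extra flags the log passes for a control-plane member.
	func joinAsControlPlane(joinCmd, advertiseIP string) error {
		full := joinCmd +
			" --control-plane" +
			" --apiserver-advertise-address=" + advertiseIP +
			" --apiserver-bind-port=8443" +
			" --cri-socket=unix:///var/run/crio/crio.sock" +
			" --ignore-preflight-errors=all"
		if err := exec.Command("bash", "-c", full).Run(); err != nil {
			return fmt.Errorf("kubeadm join failed: %w", err)
		}
		// Bring the kubelet up, as the log does right after the join.
		return exec.Command("bash", "-c",
			"systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet").Run()
	}

	func main() {
		join, err := printJoinCommand()
		if err != nil {
			log.Fatal(err)
		}
		if err := joinAsControlPlane(join, "192.168.39.112"); err != nil {
			log.Fatal(err)
		}
	}

After the join, the label and taint removal seen above are ordinary kubectl operations against the new node object.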
	I0805 23:11:52.799617   28839 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.112 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 23:11:52.799937   28839 config.go:182] Loaded profile config "ha-044175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:11:52.801375   28839 out.go:177] * Verifying Kubernetes components...
	I0805 23:11:52.802951   28839 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 23:11:53.098436   28839 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 23:11:53.120645   28839 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19373-9606/kubeconfig
	I0805 23:11:53.120920   28839 kapi.go:59] client config for ha-044175: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/client.crt", KeyFile:"/home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/client.key", CAFile:"/home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02940), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0805 23:11:53.120985   28839 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.57:8443
	I0805 23:11:53.121174   28839 node_ready.go:35] waiting up to 6m0s for node "ha-044175-m02" to be "Ready" ...
	I0805 23:11:53.121256   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:11:53.121263   28839 round_trippers.go:469] Request Headers:
	I0805 23:11:53.121272   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:11:53.121275   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:11:53.148497   28839 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I0805 23:11:53.621949   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:11:53.621974   28839 round_trippers.go:469] Request Headers:
	I0805 23:11:53.621986   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:11:53.621992   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:11:53.627851   28839 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0805 23:11:54.121485   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:11:54.121505   28839 round_trippers.go:469] Request Headers:
	I0805 23:11:54.121513   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:11:54.121517   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:11:54.125287   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:11:54.621715   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:11:54.621738   28839 round_trippers.go:469] Request Headers:
	I0805 23:11:54.621746   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:11:54.621751   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:11:54.626273   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:11:55.121530   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:11:55.121553   28839 round_trippers.go:469] Request Headers:
	I0805 23:11:55.121562   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:11:55.121568   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:11:55.124549   28839 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 23:11:55.125343   28839 node_ready.go:53] node "ha-044175-m02" has status "Ready":"False"
	I0805 23:11:55.621745   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:11:55.621769   28839 round_trippers.go:469] Request Headers:
	I0805 23:11:55.621776   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:11:55.621779   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:11:55.624905   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:11:56.121489   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:11:56.121509   28839 round_trippers.go:469] Request Headers:
	I0805 23:11:56.121515   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:11:56.121519   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:11:56.124629   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:11:56.622128   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:11:56.622149   28839 round_trippers.go:469] Request Headers:
	I0805 23:11:56.622157   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:11:56.622161   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:11:56.625675   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:11:57.122278   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:11:57.122306   28839 round_trippers.go:469] Request Headers:
	I0805 23:11:57.122316   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:11:57.122329   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:11:57.125550   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:11:57.126534   28839 node_ready.go:53] node "ha-044175-m02" has status "Ready":"False"
	I0805 23:11:57.622356   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:11:57.622381   28839 round_trippers.go:469] Request Headers:
	I0805 23:11:57.622389   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:11:57.622392   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:11:57.625634   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:11:58.121512   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:11:58.121533   28839 round_trippers.go:469] Request Headers:
	I0805 23:11:58.121540   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:11:58.121546   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:11:58.125008   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:11:58.622188   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:11:58.622213   28839 round_trippers.go:469] Request Headers:
	I0805 23:11:58.622221   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:11:58.622226   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:11:58.627086   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:11:59.121801   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:11:59.121829   28839 round_trippers.go:469] Request Headers:
	I0805 23:11:59.121837   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:11:59.121843   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:11:59.125563   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:11:59.621918   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:11:59.621939   28839 round_trippers.go:469] Request Headers:
	I0805 23:11:59.621946   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:11:59.621951   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:11:59.625132   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:11:59.626012   28839 node_ready.go:53] node "ha-044175-m02" has status "Ready":"False"
	I0805 23:12:00.122129   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:00.122149   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:00.122156   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:00.122160   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:00.126570   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:12:00.621605   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:00.621631   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:00.621644   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:00.621651   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:00.625040   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:01.121793   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:01.121814   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:01.121822   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:01.121827   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:01.125732   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:01.621884   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:01.621911   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:01.621922   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:01.621929   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:01.625225   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:02.122259   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:02.122280   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:02.122287   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:02.122291   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:02.125617   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:02.126221   28839 node_ready.go:53] node "ha-044175-m02" has status "Ready":"False"
	I0805 23:12:02.621442   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:02.621465   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:02.621476   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:02.621481   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:02.624892   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:03.122381   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:03.122402   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:03.122408   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:03.122412   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:03.128861   28839 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0805 23:12:03.622378   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:03.622400   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:03.622411   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:03.622415   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:03.625999   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:04.122150   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:04.122176   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:04.122185   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:04.122191   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:04.125491   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:04.126353   28839 node_ready.go:53] node "ha-044175-m02" has status "Ready":"False"
	I0805 23:12:04.622334   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:04.622358   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:04.622364   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:04.622369   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:04.626412   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:12:05.121417   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:05.121436   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:05.121443   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:05.121446   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:05.124733   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:05.621624   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:05.621646   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:05.621653   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:05.621658   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:05.625121   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:06.122214   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:06.122240   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:06.122252   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:06.122259   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:06.125766   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:06.621591   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:06.621613   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:06.621620   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:06.621626   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:06.625163   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:06.625992   28839 node_ready.go:53] node "ha-044175-m02" has status "Ready":"False"
	I0805 23:12:07.121365   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:07.121389   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:07.121399   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:07.121404   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:07.125591   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:12:07.621815   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:07.621848   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:07.621858   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:07.621862   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:07.625062   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:08.121368   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:08.121402   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:08.121409   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:08.121412   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:08.124749   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:08.622066   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:08.622091   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:08.622100   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:08.622105   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:08.625485   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:08.626361   28839 node_ready.go:53] node "ha-044175-m02" has status "Ready":"False"
	I0805 23:12:09.121550   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:09.121572   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:09.121580   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:09.121583   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:09.124987   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:09.621668   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:09.621691   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:09.621710   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:09.621715   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:09.625294   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:10.121590   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:10.121624   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:10.121633   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:10.121636   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:10.126103   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:12:10.621530   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:10.621551   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:10.621560   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:10.621565   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:10.624726   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:11.121423   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:11.121444   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:11.121452   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:11.121455   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:11.124911   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:11.125341   28839 node_ready.go:53] node "ha-044175-m02" has status "Ready":"False"
	I0805 23:12:11.621691   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:11.621716   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:11.621726   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:11.621731   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:11.625061   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:12.121996   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:12.122028   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:12.122036   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:12.122042   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:12.125612   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:12.126424   28839 node_ready.go:49] node "ha-044175-m02" has status "Ready":"True"
	I0805 23:12:12.126449   28839 node_ready.go:38] duration metric: took 19.00525469s for node "ha-044175-m02" to be "Ready" ...
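The node_ready polling above (one GET roughly every 500ms until the node reports Ready) amounts to fetching the node and checking its Ready condition. A rough client-go equivalent is sketched below; the kubeconfig path is a placeholder and the helper is illustrative, not minikube's own code.

	// node_ready_sketch.go - illustrative poll of a node's Ready condition.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the named node has condition Ready=True.
	func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			log.Fatal(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute) // same 6m budget as the log
		defer cancel()
		for ctx.Err() == nil {
			if ok, err := nodeReady(ctx, cs, "ha-044175-m02"); err == nil && ok {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms interval between GETs above
		}
		log.Fatal("timed out waiting for node to become Ready")
	}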
	I0805 23:12:12.126465   28839 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 23:12:12.126527   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods
	I0805 23:12:12.126536   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:12.126543   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:12.126551   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:12.135406   28839 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0805 23:12:12.143222   28839 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-g9bml" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:12.143311   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-g9bml
	I0805 23:12:12.143319   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:12.143326   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:12.143334   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:12.146612   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:12.147424   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175
	I0805 23:12:12.147441   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:12.147449   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:12.147454   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:12.150255   28839 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 23:12:12.150818   28839 pod_ready.go:92] pod "coredns-7db6d8ff4d-g9bml" in "kube-system" namespace has status "Ready":"True"
	I0805 23:12:12.150839   28839 pod_ready.go:81] duration metric: took 7.590146ms for pod "coredns-7db6d8ff4d-g9bml" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:12.150848   28839 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vzhst" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:12.150942   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vzhst
	I0805 23:12:12.150952   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:12.150959   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:12.150963   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:12.153621   28839 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 23:12:12.154355   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175
	I0805 23:12:12.154370   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:12.154378   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:12.154382   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:12.156751   28839 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 23:12:12.157366   28839 pod_ready.go:92] pod "coredns-7db6d8ff4d-vzhst" in "kube-system" namespace has status "Ready":"True"
	I0805 23:12:12.157390   28839 pod_ready.go:81] duration metric: took 6.536219ms for pod "coredns-7db6d8ff4d-vzhst" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:12.157401   28839 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-044175" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:12.157450   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/etcd-ha-044175
	I0805 23:12:12.157457   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:12.157465   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:12.157468   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:12.159895   28839 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 23:12:12.160437   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175
	I0805 23:12:12.160451   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:12.160457   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:12.160461   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:12.162694   28839 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 23:12:12.163116   28839 pod_ready.go:92] pod "etcd-ha-044175" in "kube-system" namespace has status "Ready":"True"
	I0805 23:12:12.163134   28839 pod_ready.go:81] duration metric: took 5.728191ms for pod "etcd-ha-044175" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:12.163143   28839 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-044175-m02" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:12.163194   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/etcd-ha-044175-m02
	I0805 23:12:12.163203   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:12.163210   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:12.163213   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:12.166402   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:12.167376   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:12.167393   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:12.167401   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:12.167404   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:12.169619   28839 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 23:12:12.170517   28839 pod_ready.go:92] pod "etcd-ha-044175-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 23:12:12.170534   28839 pod_ready.go:81] duration metric: took 7.385716ms for pod "etcd-ha-044175-m02" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:12.170547   28839 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-044175" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:12.322937   28839 request.go:629] Waited for 152.336703ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-044175
	I0805 23:12:12.323005   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-044175
	I0805 23:12:12.323012   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:12.323021   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:12.323027   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:12.326531   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
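The repeated "Waited for ... due to client-side throttling, not priority and fairness" messages come from client-go's own token-bucket rate limiter, not from the API server; with QPS and Burst left at zero in the rest.Config dumped earlier, client-go falls back to its defaults and spaces out these bursts of pod/node GETs. A hypothetical tweak when building a client is shown below (placeholder kubeconfig path, example values only).

	// throttle_sketch.go - raising the client-side rate limits (illustrative values).
	package main

	import (
		"fmt"
		"log"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			log.Fatal(err)
		}
		// Zero values (as in the log's rest.Config) mean "use client-go defaults";
		// higher values reduce the client-side waits reported by request.go:629.
		cfg.QPS = 50
		cfg.Burst = 100
		cs := kubernetes.NewForConfigOrDie(cfg)
		fmt.Println("client ready:", cs != nil)
	}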
	I0805 23:12:12.522848   28839 request.go:629] Waited for 195.379036ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes/ha-044175
	I0805 23:12:12.522933   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175
	I0805 23:12:12.522940   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:12.522947   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:12.522951   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:12.526139   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:12.526660   28839 pod_ready.go:92] pod "kube-apiserver-ha-044175" in "kube-system" namespace has status "Ready":"True"
	I0805 23:12:12.526677   28839 pod_ready.go:81] duration metric: took 356.124671ms for pod "kube-apiserver-ha-044175" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:12.526687   28839 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-044175-m02" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:12.722910   28839 request.go:629] Waited for 196.160326ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-044175-m02
	I0805 23:12:12.723002   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-044175-m02
	I0805 23:12:12.723010   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:12.723018   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:12.723028   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:12.726207   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:12.922332   28839 request.go:629] Waited for 195.350633ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:12.922388   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:12.922393   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:12.922400   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:12.922404   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:12.925742   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:12.926446   28839 pod_ready.go:92] pod "kube-apiserver-ha-044175-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 23:12:12.926465   28839 pod_ready.go:81] duration metric: took 399.771524ms for pod "kube-apiserver-ha-044175-m02" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:12.926475   28839 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-044175" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:13.122661   28839 request.go:629] Waited for 196.12267ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-044175
	I0805 23:12:13.122738   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-044175
	I0805 23:12:13.122743   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:13.122751   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:13.122756   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:13.125878   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:13.322755   28839 request.go:629] Waited for 196.363874ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes/ha-044175
	I0805 23:12:13.322812   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175
	I0805 23:12:13.322817   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:13.322825   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:13.322836   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:13.326871   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:12:13.327724   28839 pod_ready.go:92] pod "kube-controller-manager-ha-044175" in "kube-system" namespace has status "Ready":"True"
	I0805 23:12:13.327746   28839 pod_ready.go:81] duration metric: took 401.265029ms for pod "kube-controller-manager-ha-044175" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:13.327757   28839 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-044175-m02" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:13.522071   28839 request.go:629] Waited for 194.256831ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-044175-m02
	I0805 23:12:13.522134   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-044175-m02
	I0805 23:12:13.522139   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:13.522158   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:13.522162   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:13.526485   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:12:13.722163   28839 request.go:629] Waited for 194.278089ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:13.722215   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:13.722220   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:13.722228   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:13.722231   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:13.725186   28839 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 23:12:13.725923   28839 pod_ready.go:92] pod "kube-controller-manager-ha-044175-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 23:12:13.725941   28839 pod_ready.go:81] duration metric: took 398.177359ms for pod "kube-controller-manager-ha-044175-m02" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:13.725952   28839 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jfs9q" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:13.923099   28839 request.go:629] Waited for 197.047899ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jfs9q
	I0805 23:12:13.923162   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jfs9q
	I0805 23:12:13.923167   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:13.923175   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:13.923180   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:13.926625   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:14.122760   28839 request.go:629] Waited for 195.388811ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:14.122819   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:14.122825   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:14.122833   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:14.122837   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:14.126522   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:14.127848   28839 pod_ready.go:92] pod "kube-proxy-jfs9q" in "kube-system" namespace has status "Ready":"True"
	I0805 23:12:14.127874   28839 pod_ready.go:81] duration metric: took 401.91347ms for pod "kube-proxy-jfs9q" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:14.127887   28839 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vj5sd" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:14.322970   28839 request.go:629] Waited for 194.988509ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vj5sd
	I0805 23:12:14.323029   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vj5sd
	I0805 23:12:14.323035   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:14.323042   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:14.323046   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:14.326746   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:14.522119   28839 request.go:629] Waited for 194.304338ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes/ha-044175
	I0805 23:12:14.522179   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175
	I0805 23:12:14.522192   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:14.522214   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:14.522220   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:14.528370   28839 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0805 23:12:14.529018   28839 pod_ready.go:92] pod "kube-proxy-vj5sd" in "kube-system" namespace has status "Ready":"True"
	I0805 23:12:14.529040   28839 pod_ready.go:81] duration metric: took 401.145004ms for pod "kube-proxy-vj5sd" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:14.529049   28839 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-044175" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:14.722709   28839 request.go:629] Waited for 193.590518ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-044175
	I0805 23:12:14.722779   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-044175
	I0805 23:12:14.722788   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:14.722798   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:14.722804   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:14.727234   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:12:14.922371   28839 request.go:629] Waited for 194.386722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes/ha-044175
	I0805 23:12:14.922432   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175
	I0805 23:12:14.922439   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:14.922448   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:14.922453   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:14.926046   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:14.926586   28839 pod_ready.go:92] pod "kube-scheduler-ha-044175" in "kube-system" namespace has status "Ready":"True"
	I0805 23:12:14.926604   28839 pod_ready.go:81] duration metric: took 397.548315ms for pod "kube-scheduler-ha-044175" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:14.926613   28839 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-044175-m02" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:15.122823   28839 request.go:629] Waited for 196.138859ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-044175-m02
	I0805 23:12:15.122879   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-044175-m02
	I0805 23:12:15.122885   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:15.122895   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:15.122903   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:15.126637   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:15.322528   28839 request.go:629] Waited for 194.388628ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:15.322589   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:15.322594   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:15.322601   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:15.322605   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:15.325830   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:15.326484   28839 pod_ready.go:92] pod "kube-scheduler-ha-044175-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 23:12:15.326500   28839 pod_ready.go:81] duration metric: took 399.881115ms for pod "kube-scheduler-ha-044175-m02" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:15.326513   28839 pod_ready.go:38] duration metric: took 3.200030463s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
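Each pod_ready check above follows the same pattern as the node check: fetch the object and inspect its Ready condition. A small illustrative helper for the per-pod case is sketched here (not minikube's own code); called in a loop against the kube-system pods listed above, it corresponds to the 'has status "Ready":"True"' lines.

	// Package readiness sketches the per-pod Ready check used above.
	package readiness

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// PodReady reports whether the named pod's Ready condition is True.
	func PodReady(ctx context.Context, cs kubernetes.Interface, namespace, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}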
	I0805 23:12:15.326536   28839 api_server.go:52] waiting for apiserver process to appear ...
	I0805 23:12:15.326592   28839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 23:12:15.343544   28839 api_server.go:72] duration metric: took 22.543885874s to wait for apiserver process to appear ...
	I0805 23:12:15.343576   28839 api_server.go:88] waiting for apiserver healthz status ...
	I0805 23:12:15.343604   28839 api_server.go:253] Checking apiserver healthz at https://192.168.39.57:8443/healthz ...
	I0805 23:12:15.348183   28839 api_server.go:279] https://192.168.39.57:8443/healthz returned 200:
	ok
	I0805 23:12:15.348281   28839 round_trippers.go:463] GET https://192.168.39.57:8443/version
	I0805 23:12:15.348293   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:15.348301   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:15.348305   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:15.349226   28839 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 23:12:15.349349   28839 api_server.go:141] control plane version: v1.30.3
	I0805 23:12:15.349368   28839 api_server.go:131] duration metric: took 5.784906ms to wait for apiserver health ...
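The healthz probe above is a plain HTTPS GET against the API server that expects a 200 response with body "ok". A stripped-down sketch follows; the client in the log trusts the minikube CA and presents client certificates, whereas this example skips TLS verification purely for illustration.

	// healthz_sketch.go - illustrative apiserver health probe.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Illustrative only: the client in the log verifies against the minikube CA instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.57:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
	}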
	I0805 23:12:15.349383   28839 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 23:12:15.522875   28839 request.go:629] Waited for 173.4123ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods
	I0805 23:12:15.522929   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods
	I0805 23:12:15.522934   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:15.522942   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:15.522946   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:15.528927   28839 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0805 23:12:15.534765   28839 system_pods.go:59] 17 kube-system pods found
	I0805 23:12:15.534810   28839 system_pods.go:61] "coredns-7db6d8ff4d-g9bml" [fd474413-e416-48db-a7bf-f3c40675819b] Running
	I0805 23:12:15.534817   28839 system_pods.go:61] "coredns-7db6d8ff4d-vzhst" [f9c09745-be29-4403-9e7d-f9e4eaae5cac] Running
	I0805 23:12:15.534821   28839 system_pods.go:61] "etcd-ha-044175" [f9008d52-5a0c-4a6b-9cdf-7df18dd78752] Running
	I0805 23:12:15.534824   28839 system_pods.go:61] "etcd-ha-044175-m02" [773f42be-f8b5-47f0-bcd0-36bd6ae24bab] Running
	I0805 23:12:15.534828   28839 system_pods.go:61] "kindnet-hqhgc" [de6b28dc-79ea-43af-868e-e32180dcd5f2] Running
	I0805 23:12:15.534833   28839 system_pods.go:61] "kindnet-xqx4z" [8455705e-b140-4f1e-abff-6a71bbb5415f] Running
	I0805 23:12:15.534838   28839 system_pods.go:61] "kube-apiserver-ha-044175" [4e39654d-531d-4cf4-b4a9-beeada8e8d05] Running
	I0805 23:12:15.534842   28839 system_pods.go:61] "kube-apiserver-ha-044175-m02" [06dfad00-f627-43cd-abea-c3a34d423964] Running
	I0805 23:12:15.534847   28839 system_pods.go:61] "kube-controller-manager-ha-044175" [d6f6d163-103f-4af4-976f-c255d1933bb2] Running
	I0805 23:12:15.534855   28839 system_pods.go:61] "kube-controller-manager-ha-044175-m02" [1bf050d3-1969-4ca1-89d3-f729989fd6b8] Running
	I0805 23:12:15.534864   28839 system_pods.go:61] "kube-proxy-jfs9q" [d8d0b4df-e1e1-4354-ba55-594dec7d1e89] Running
	I0805 23:12:15.534868   28839 system_pods.go:61] "kube-proxy-vj5sd" [d6c9cdcb-e1b7-44c8-a6e3-5e5aeb76ba03] Running
	I0805 23:12:15.534872   28839 system_pods.go:61] "kube-scheduler-ha-044175" [41c96a32-1b26-4e05-a21a-48c4fd913b9f] Running
	I0805 23:12:15.534878   28839 system_pods.go:61] "kube-scheduler-ha-044175-m02" [8e41f86c-0b86-40be-a524-fbae6283693d] Running
	I0805 23:12:15.534881   28839 system_pods.go:61] "kube-vip-ha-044175" [505ff885-b8a0-48bd-8d1e-81e4583b48af] Running
	I0805 23:12:15.534884   28839 system_pods.go:61] "kube-vip-ha-044175-m02" [ffbecaef-6482-4c4e-8268-4b66e4799be5] Running
	I0805 23:12:15.534888   28839 system_pods.go:61] "storage-provisioner" [d30d1a5b-cfbe-4de6-a964-75c32e5dbf62] Running
	I0805 23:12:15.534893   28839 system_pods.go:74] duration metric: took 185.501567ms to wait for pod list to return data ...
	I0805 23:12:15.534904   28839 default_sa.go:34] waiting for default service account to be created ...
	I0805 23:12:15.722680   28839 request.go:629] Waited for 187.701592ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/default/serviceaccounts
	I0805 23:12:15.722770   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/default/serviceaccounts
	I0805 23:12:15.722782   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:15.722792   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:15.722800   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:15.726559   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:15.726832   28839 default_sa.go:45] found service account: "default"
	I0805 23:12:15.726852   28839 default_sa.go:55] duration metric: took 191.941352ms for default service account to be created ...
	I0805 23:12:15.726863   28839 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 23:12:15.922594   28839 request.go:629] Waited for 195.648365ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods
	I0805 23:12:15.922662   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods
	I0805 23:12:15.922669   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:15.922679   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:15.922684   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:15.928553   28839 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0805 23:12:15.933038   28839 system_pods.go:86] 17 kube-system pods found
	I0805 23:12:15.933072   28839 system_pods.go:89] "coredns-7db6d8ff4d-g9bml" [fd474413-e416-48db-a7bf-f3c40675819b] Running
	I0805 23:12:15.933081   28839 system_pods.go:89] "coredns-7db6d8ff4d-vzhst" [f9c09745-be29-4403-9e7d-f9e4eaae5cac] Running
	I0805 23:12:15.933089   28839 system_pods.go:89] "etcd-ha-044175" [f9008d52-5a0c-4a6b-9cdf-7df18dd78752] Running
	I0805 23:12:15.933096   28839 system_pods.go:89] "etcd-ha-044175-m02" [773f42be-f8b5-47f0-bcd0-36bd6ae24bab] Running
	I0805 23:12:15.933102   28839 system_pods.go:89] "kindnet-hqhgc" [de6b28dc-79ea-43af-868e-e32180dcd5f2] Running
	I0805 23:12:15.933109   28839 system_pods.go:89] "kindnet-xqx4z" [8455705e-b140-4f1e-abff-6a71bbb5415f] Running
	I0805 23:12:15.933116   28839 system_pods.go:89] "kube-apiserver-ha-044175" [4e39654d-531d-4cf4-b4a9-beeada8e8d05] Running
	I0805 23:12:15.933123   28839 system_pods.go:89] "kube-apiserver-ha-044175-m02" [06dfad00-f627-43cd-abea-c3a34d423964] Running
	I0805 23:12:15.933131   28839 system_pods.go:89] "kube-controller-manager-ha-044175" [d6f6d163-103f-4af4-976f-c255d1933bb2] Running
	I0805 23:12:15.933142   28839 system_pods.go:89] "kube-controller-manager-ha-044175-m02" [1bf050d3-1969-4ca1-89d3-f729989fd6b8] Running
	I0805 23:12:15.933153   28839 system_pods.go:89] "kube-proxy-jfs9q" [d8d0b4df-e1e1-4354-ba55-594dec7d1e89] Running
	I0805 23:12:15.933161   28839 system_pods.go:89] "kube-proxy-vj5sd" [d6c9cdcb-e1b7-44c8-a6e3-5e5aeb76ba03] Running
	I0805 23:12:15.933169   28839 system_pods.go:89] "kube-scheduler-ha-044175" [41c96a32-1b26-4e05-a21a-48c4fd913b9f] Running
	I0805 23:12:15.933177   28839 system_pods.go:89] "kube-scheduler-ha-044175-m02" [8e41f86c-0b86-40be-a524-fbae6283693d] Running
	I0805 23:12:15.933185   28839 system_pods.go:89] "kube-vip-ha-044175" [505ff885-b8a0-48bd-8d1e-81e4583b48af] Running
	I0805 23:12:15.933192   28839 system_pods.go:89] "kube-vip-ha-044175-m02" [ffbecaef-6482-4c4e-8268-4b66e4799be5] Running
	I0805 23:12:15.933201   28839 system_pods.go:89] "storage-provisioner" [d30d1a5b-cfbe-4de6-a964-75c32e5dbf62] Running
	I0805 23:12:15.933214   28839 system_pods.go:126] duration metric: took 206.344214ms to wait for k8s-apps to be running ...
	I0805 23:12:15.933225   28839 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 23:12:15.933286   28839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 23:12:15.951297   28839 system_svc.go:56] duration metric: took 18.065984ms WaitForService to wait for kubelet
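The kubelet check above relies on systemd's exit status: an is-active probe exits 0 only while the unit is running. An equivalent one-off check in Go is sketched below (illustrative; it has to run on the node itself).

	// kubelet_check_sketch.go - illustrative systemd unit check.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Exit status 0 means the kubelet unit is active; any other status means it is not.
		err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
		fmt.Println("kubelet active:", err == nil)
	}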
	I0805 23:12:15.951329   28839 kubeadm.go:582] duration metric: took 23.151674816s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 23:12:15.951350   28839 node_conditions.go:102] verifying NodePressure condition ...
	I0805 23:12:16.122799   28839 request.go:629] Waited for 171.37013ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes
	I0805 23:12:16.122865   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes
	I0805 23:12:16.122880   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:16.122891   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:16.122901   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:16.131431   28839 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0805 23:12:16.132476   28839 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 23:12:16.132503   28839 node_conditions.go:123] node cpu capacity is 2
	I0805 23:12:16.132523   28839 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 23:12:16.132527   28839 node_conditions.go:123] node cpu capacity is 2
	I0805 23:12:16.132531   28839 node_conditions.go:105] duration metric: took 181.176198ms to run NodePressure ...
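(Editor's note: the NodePressure step above lists the nodes and reads each one's CPU and ephemeral-storage capacity, which is where the "cpu capacity is 2" and "17734596Ki" lines come from. A small illustrative client-go sketch of that read, again assuming a default kubeconfig:)

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu capacity %s, ephemeral-storage capacity %s\n",
			n.Name, cpu.String(), storage.String())
	}
}
```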
	I0805 23:12:16.132544   28839 start.go:241] waiting for startup goroutines ...
	I0805 23:12:16.132575   28839 start.go:255] writing updated cluster config ...
	I0805 23:12:16.135079   28839 out.go:177] 
	I0805 23:12:16.136635   28839 config.go:182] Loaded profile config "ha-044175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:12:16.136721   28839 profile.go:143] Saving config to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/config.json ...
	I0805 23:12:16.138404   28839 out.go:177] * Starting "ha-044175-m03" control-plane node in "ha-044175" cluster
	I0805 23:12:16.139831   28839 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 23:12:16.139854   28839 cache.go:56] Caching tarball of preloaded images
	I0805 23:12:16.139981   28839 preload.go:172] Found /home/jenkins/minikube-integration/19373-9606/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0805 23:12:16.140001   28839 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0805 23:12:16.140108   28839 profile.go:143] Saving config to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/config.json ...
	I0805 23:12:16.140337   28839 start.go:360] acquireMachinesLock for ha-044175-m03: {Name:mkd2ba511c39504598222edbf83078b718329186 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 23:12:16.140400   28839 start.go:364] duration metric: took 35.222µs to acquireMachinesLock for "ha-044175-m03"
	I0805 23:12:16.140420   28839 start.go:93] Provisioning new machine with config: &{Name:ha-044175 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-044175 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.112 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 23:12:16.140537   28839 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0805 23:12:16.142457   28839 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 23:12:16.142624   28839 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:12:16.142673   28839 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:12:16.158944   28839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40793
	I0805 23:12:16.159390   28839 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:12:16.159849   28839 main.go:141] libmachine: Using API Version  1
	I0805 23:12:16.159866   28839 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:12:16.160215   28839 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:12:16.160411   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetMachineName
	I0805 23:12:16.160572   28839 main.go:141] libmachine: (ha-044175-m03) Calling .DriverName
	I0805 23:12:16.160737   28839 start.go:159] libmachine.API.Create for "ha-044175" (driver="kvm2")
	I0805 23:12:16.160771   28839 client.go:168] LocalClient.Create starting
	I0805 23:12:16.160810   28839 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem
	I0805 23:12:16.160850   28839 main.go:141] libmachine: Decoding PEM data...
	I0805 23:12:16.160868   28839 main.go:141] libmachine: Parsing certificate...
	I0805 23:12:16.160921   28839 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem
	I0805 23:12:16.160944   28839 main.go:141] libmachine: Decoding PEM data...
	I0805 23:12:16.160959   28839 main.go:141] libmachine: Parsing certificate...
	I0805 23:12:16.160978   28839 main.go:141] libmachine: Running pre-create checks...
	I0805 23:12:16.160993   28839 main.go:141] libmachine: (ha-044175-m03) Calling .PreCreateCheck
	I0805 23:12:16.161166   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetConfigRaw
	I0805 23:12:16.161570   28839 main.go:141] libmachine: Creating machine...
	I0805 23:12:16.161583   28839 main.go:141] libmachine: (ha-044175-m03) Calling .Create
	I0805 23:12:16.161702   28839 main.go:141] libmachine: (ha-044175-m03) Creating KVM machine...
	I0805 23:12:16.163133   28839 main.go:141] libmachine: (ha-044175-m03) DBG | found existing default KVM network
	I0805 23:12:16.163285   28839 main.go:141] libmachine: (ha-044175-m03) DBG | found existing private KVM network mk-ha-044175
	I0805 23:12:16.163415   28839 main.go:141] libmachine: (ha-044175-m03) Setting up store path in /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m03 ...
	I0805 23:12:16.163439   28839 main.go:141] libmachine: (ha-044175-m03) Building disk image from file:///home/jenkins/minikube-integration/19373-9606/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0805 23:12:16.163516   28839 main.go:141] libmachine: (ha-044175-m03) DBG | I0805 23:12:16.163402   29643 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19373-9606/.minikube
	I0805 23:12:16.163576   28839 main.go:141] libmachine: (ha-044175-m03) Downloading /home/jenkins/minikube-integration/19373-9606/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19373-9606/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0805 23:12:16.391616   28839 main.go:141] libmachine: (ha-044175-m03) DBG | I0805 23:12:16.391460   29643 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m03/id_rsa...
	I0805 23:12:16.494948   28839 main.go:141] libmachine: (ha-044175-m03) DBG | I0805 23:12:16.494820   29643 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m03/ha-044175-m03.rawdisk...
	I0805 23:12:16.494984   28839 main.go:141] libmachine: (ha-044175-m03) DBG | Writing magic tar header
	I0805 23:12:16.494998   28839 main.go:141] libmachine: (ha-044175-m03) DBG | Writing SSH key tar header
	I0805 23:12:16.495009   28839 main.go:141] libmachine: (ha-044175-m03) DBG | I0805 23:12:16.494927   29643 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m03 ...
	I0805 23:12:16.495025   28839 main.go:141] libmachine: (ha-044175-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m03
	I0805 23:12:16.495073   28839 main.go:141] libmachine: (ha-044175-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19373-9606/.minikube/machines
	I0805 23:12:16.495090   28839 main.go:141] libmachine: (ha-044175-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19373-9606/.minikube
	I0805 23:12:16.495102   28839 main.go:141] libmachine: (ha-044175-m03) Setting executable bit set on /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m03 (perms=drwx------)
	I0805 23:12:16.495136   28839 main.go:141] libmachine: (ha-044175-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19373-9606
	I0805 23:12:16.495158   28839 main.go:141] libmachine: (ha-044175-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0805 23:12:16.495172   28839 main.go:141] libmachine: (ha-044175-m03) Setting executable bit set on /home/jenkins/minikube-integration/19373-9606/.minikube/machines (perms=drwxr-xr-x)
	I0805 23:12:16.495197   28839 main.go:141] libmachine: (ha-044175-m03) Setting executable bit set on /home/jenkins/minikube-integration/19373-9606/.minikube (perms=drwxr-xr-x)
	I0805 23:12:16.495212   28839 main.go:141] libmachine: (ha-044175-m03) Setting executable bit set on /home/jenkins/minikube-integration/19373-9606 (perms=drwxrwxr-x)
	I0805 23:12:16.495227   28839 main.go:141] libmachine: (ha-044175-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0805 23:12:16.495242   28839 main.go:141] libmachine: (ha-044175-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0805 23:12:16.495255   28839 main.go:141] libmachine: (ha-044175-m03) DBG | Checking permissions on dir: /home/jenkins
	I0805 23:12:16.495266   28839 main.go:141] libmachine: (ha-044175-m03) Creating domain...
	I0805 23:12:16.495279   28839 main.go:141] libmachine: (ha-044175-m03) DBG | Checking permissions on dir: /home
	I0805 23:12:16.495296   28839 main.go:141] libmachine: (ha-044175-m03) DBG | Skipping /home - not owner
	I0805 23:12:16.496167   28839 main.go:141] libmachine: (ha-044175-m03) define libvirt domain using xml: 
	I0805 23:12:16.496204   28839 main.go:141] libmachine: (ha-044175-m03) <domain type='kvm'>
	I0805 23:12:16.496219   28839 main.go:141] libmachine: (ha-044175-m03)   <name>ha-044175-m03</name>
	I0805 23:12:16.496232   28839 main.go:141] libmachine: (ha-044175-m03)   <memory unit='MiB'>2200</memory>
	I0805 23:12:16.496243   28839 main.go:141] libmachine: (ha-044175-m03)   <vcpu>2</vcpu>
	I0805 23:12:16.496253   28839 main.go:141] libmachine: (ha-044175-m03)   <features>
	I0805 23:12:16.496262   28839 main.go:141] libmachine: (ha-044175-m03)     <acpi/>
	I0805 23:12:16.496276   28839 main.go:141] libmachine: (ha-044175-m03)     <apic/>
	I0805 23:12:16.496288   28839 main.go:141] libmachine: (ha-044175-m03)     <pae/>
	I0805 23:12:16.496297   28839 main.go:141] libmachine: (ha-044175-m03)     
	I0805 23:12:16.496307   28839 main.go:141] libmachine: (ha-044175-m03)   </features>
	I0805 23:12:16.496316   28839 main.go:141] libmachine: (ha-044175-m03)   <cpu mode='host-passthrough'>
	I0805 23:12:16.496327   28839 main.go:141] libmachine: (ha-044175-m03)   
	I0805 23:12:16.496335   28839 main.go:141] libmachine: (ha-044175-m03)   </cpu>
	I0805 23:12:16.496366   28839 main.go:141] libmachine: (ha-044175-m03)   <os>
	I0805 23:12:16.496387   28839 main.go:141] libmachine: (ha-044175-m03)     <type>hvm</type>
	I0805 23:12:16.496411   28839 main.go:141] libmachine: (ha-044175-m03)     <boot dev='cdrom'/>
	I0805 23:12:16.496427   28839 main.go:141] libmachine: (ha-044175-m03)     <boot dev='hd'/>
	I0805 23:12:16.496443   28839 main.go:141] libmachine: (ha-044175-m03)     <bootmenu enable='no'/>
	I0805 23:12:16.496459   28839 main.go:141] libmachine: (ha-044175-m03)   </os>
	I0805 23:12:16.496471   28839 main.go:141] libmachine: (ha-044175-m03)   <devices>
	I0805 23:12:16.496482   28839 main.go:141] libmachine: (ha-044175-m03)     <disk type='file' device='cdrom'>
	I0805 23:12:16.496497   28839 main.go:141] libmachine: (ha-044175-m03)       <source file='/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m03/boot2docker.iso'/>
	I0805 23:12:16.496510   28839 main.go:141] libmachine: (ha-044175-m03)       <target dev='hdc' bus='scsi'/>
	I0805 23:12:16.496523   28839 main.go:141] libmachine: (ha-044175-m03)       <readonly/>
	I0805 23:12:16.496537   28839 main.go:141] libmachine: (ha-044175-m03)     </disk>
	I0805 23:12:16.496551   28839 main.go:141] libmachine: (ha-044175-m03)     <disk type='file' device='disk'>
	I0805 23:12:16.496566   28839 main.go:141] libmachine: (ha-044175-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0805 23:12:16.496582   28839 main.go:141] libmachine: (ha-044175-m03)       <source file='/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m03/ha-044175-m03.rawdisk'/>
	I0805 23:12:16.496593   28839 main.go:141] libmachine: (ha-044175-m03)       <target dev='hda' bus='virtio'/>
	I0805 23:12:16.496607   28839 main.go:141] libmachine: (ha-044175-m03)     </disk>
	I0805 23:12:16.496624   28839 main.go:141] libmachine: (ha-044175-m03)     <interface type='network'>
	I0805 23:12:16.496639   28839 main.go:141] libmachine: (ha-044175-m03)       <source network='mk-ha-044175'/>
	I0805 23:12:16.496649   28839 main.go:141] libmachine: (ha-044175-m03)       <model type='virtio'/>
	I0805 23:12:16.496659   28839 main.go:141] libmachine: (ha-044175-m03)     </interface>
	I0805 23:12:16.496667   28839 main.go:141] libmachine: (ha-044175-m03)     <interface type='network'>
	I0805 23:12:16.496673   28839 main.go:141] libmachine: (ha-044175-m03)       <source network='default'/>
	I0805 23:12:16.496682   28839 main.go:141] libmachine: (ha-044175-m03)       <model type='virtio'/>
	I0805 23:12:16.496694   28839 main.go:141] libmachine: (ha-044175-m03)     </interface>
	I0805 23:12:16.496706   28839 main.go:141] libmachine: (ha-044175-m03)     <serial type='pty'>
	I0805 23:12:16.496718   28839 main.go:141] libmachine: (ha-044175-m03)       <target port='0'/>
	I0805 23:12:16.496729   28839 main.go:141] libmachine: (ha-044175-m03)     </serial>
	I0805 23:12:16.496740   28839 main.go:141] libmachine: (ha-044175-m03)     <console type='pty'>
	I0805 23:12:16.496750   28839 main.go:141] libmachine: (ha-044175-m03)       <target type='serial' port='0'/>
	I0805 23:12:16.496760   28839 main.go:141] libmachine: (ha-044175-m03)     </console>
	I0805 23:12:16.496771   28839 main.go:141] libmachine: (ha-044175-m03)     <rng model='virtio'>
	I0805 23:12:16.496788   28839 main.go:141] libmachine: (ha-044175-m03)       <backend model='random'>/dev/random</backend>
	I0805 23:12:16.496805   28839 main.go:141] libmachine: (ha-044175-m03)     </rng>
	I0805 23:12:16.496817   28839 main.go:141] libmachine: (ha-044175-m03)     
	I0805 23:12:16.496822   28839 main.go:141] libmachine: (ha-044175-m03)     
	I0805 23:12:16.496833   28839 main.go:141] libmachine: (ha-044175-m03)   </devices>
	I0805 23:12:16.496842   28839 main.go:141] libmachine: (ha-044175-m03) </domain>
	I0805 23:12:16.496852   28839 main.go:141] libmachine: (ha-044175-m03) 
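(Editor's note: the XML dumped above is the libvirt domain definition for the new node: 2 vCPUs, 2200 MiB of memory, the boot2docker ISO attached as a CD-ROM, the raw disk image, and two virtio NICs, one on the private mk-ha-044175 network and one on libvirt's default network. A sketch, not minikube's actual code, of rendering such a definition from a template with placeholder paths; the resulting XML would then be handed to libvirt's DomainDefineXML.)

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISOPath}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>`

type domainConfig struct {
	Name, ISOPath, DiskPath, Network string
	MemoryMiB, CPUs                  int
}

func main() {
	// Values taken from the log above; file paths are placeholders.
	cfg := domainConfig{
		Name:      "ha-044175-m03",
		MemoryMiB: 2200,
		CPUs:      2,
		ISOPath:   "/path/to/boot2docker.iso",
		DiskPath:  "/path/to/ha-044175-m03.rawdisk",
		Network:   "mk-ha-044175",
	}
	var buf bytes.Buffer
	t := template.Must(template.New("domain").Parse(domainTmpl))
	if err := t.Execute(&buf, cfg); err != nil {
		panic(err)
	}
	fmt.Println(buf.String())
}
```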
	I0805 23:12:16.503725   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:6b:ba:6d in network default
	I0805 23:12:16.504450   28839 main.go:141] libmachine: (ha-044175-m03) Ensuring networks are active...
	I0805 23:12:16.504492   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:16.505339   28839 main.go:141] libmachine: (ha-044175-m03) Ensuring network default is active
	I0805 23:12:16.505649   28839 main.go:141] libmachine: (ha-044175-m03) Ensuring network mk-ha-044175 is active
	I0805 23:12:16.506103   28839 main.go:141] libmachine: (ha-044175-m03) Getting domain xml...
	I0805 23:12:16.506891   28839 main.go:141] libmachine: (ha-044175-m03) Creating domain...
	I0805 23:12:17.726625   28839 main.go:141] libmachine: (ha-044175-m03) Waiting to get IP...
	I0805 23:12:17.727449   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:17.727898   28839 main.go:141] libmachine: (ha-044175-m03) DBG | unable to find current IP address of domain ha-044175-m03 in network mk-ha-044175
	I0805 23:12:17.727926   28839 main.go:141] libmachine: (ha-044175-m03) DBG | I0805 23:12:17.727866   29643 retry.go:31] will retry after 203.767559ms: waiting for machine to come up
	I0805 23:12:17.933384   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:17.933880   28839 main.go:141] libmachine: (ha-044175-m03) DBG | unable to find current IP address of domain ha-044175-m03 in network mk-ha-044175
	I0805 23:12:17.933902   28839 main.go:141] libmachine: (ha-044175-m03) DBG | I0805 23:12:17.933844   29643 retry.go:31] will retry after 239.798979ms: waiting for machine to come up
	I0805 23:12:18.175419   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:18.175845   28839 main.go:141] libmachine: (ha-044175-m03) DBG | unable to find current IP address of domain ha-044175-m03 in network mk-ha-044175
	I0805 23:12:18.175870   28839 main.go:141] libmachine: (ha-044175-m03) DBG | I0805 23:12:18.175792   29643 retry.go:31] will retry after 326.454439ms: waiting for machine to come up
	I0805 23:12:18.504326   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:18.504792   28839 main.go:141] libmachine: (ha-044175-m03) DBG | unable to find current IP address of domain ha-044175-m03 in network mk-ha-044175
	I0805 23:12:18.504831   28839 main.go:141] libmachine: (ha-044175-m03) DBG | I0805 23:12:18.504766   29643 retry.go:31] will retry after 426.319717ms: waiting for machine to come up
	I0805 23:12:18.932425   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:18.932894   28839 main.go:141] libmachine: (ha-044175-m03) DBG | unable to find current IP address of domain ha-044175-m03 in network mk-ha-044175
	I0805 23:12:18.932928   28839 main.go:141] libmachine: (ha-044175-m03) DBG | I0805 23:12:18.932825   29643 retry.go:31] will retry after 613.530654ms: waiting for machine to come up
	I0805 23:12:19.547501   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:19.547980   28839 main.go:141] libmachine: (ha-044175-m03) DBG | unable to find current IP address of domain ha-044175-m03 in network mk-ha-044175
	I0805 23:12:19.548048   28839 main.go:141] libmachine: (ha-044175-m03) DBG | I0805 23:12:19.547951   29643 retry.go:31] will retry after 668.13083ms: waiting for machine to come up
	I0805 23:12:20.217948   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:20.218511   28839 main.go:141] libmachine: (ha-044175-m03) DBG | unable to find current IP address of domain ha-044175-m03 in network mk-ha-044175
	I0805 23:12:20.218535   28839 main.go:141] libmachine: (ha-044175-m03) DBG | I0805 23:12:20.218463   29643 retry.go:31] will retry after 1.100630535s: waiting for machine to come up
	I0805 23:12:21.320924   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:21.321377   28839 main.go:141] libmachine: (ha-044175-m03) DBG | unable to find current IP address of domain ha-044175-m03 in network mk-ha-044175
	I0805 23:12:21.321401   28839 main.go:141] libmachine: (ha-044175-m03) DBG | I0805 23:12:21.321295   29643 retry.go:31] will retry after 1.235967589s: waiting for machine to come up
	I0805 23:12:22.558632   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:22.559094   28839 main.go:141] libmachine: (ha-044175-m03) DBG | unable to find current IP address of domain ha-044175-m03 in network mk-ha-044175
	I0805 23:12:22.559115   28839 main.go:141] libmachine: (ha-044175-m03) DBG | I0805 23:12:22.559042   29643 retry.go:31] will retry after 1.216988644s: waiting for machine to come up
	I0805 23:12:23.777210   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:23.777638   28839 main.go:141] libmachine: (ha-044175-m03) DBG | unable to find current IP address of domain ha-044175-m03 in network mk-ha-044175
	I0805 23:12:23.777663   28839 main.go:141] libmachine: (ha-044175-m03) DBG | I0805 23:12:23.777586   29643 retry.go:31] will retry after 2.095063584s: waiting for machine to come up
	I0805 23:12:25.875961   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:25.876431   28839 main.go:141] libmachine: (ha-044175-m03) DBG | unable to find current IP address of domain ha-044175-m03 in network mk-ha-044175
	I0805 23:12:25.876456   28839 main.go:141] libmachine: (ha-044175-m03) DBG | I0805 23:12:25.876393   29643 retry.go:31] will retry after 1.975393786s: waiting for machine to come up
	I0805 23:12:27.853735   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:27.854234   28839 main.go:141] libmachine: (ha-044175-m03) DBG | unable to find current IP address of domain ha-044175-m03 in network mk-ha-044175
	I0805 23:12:27.854259   28839 main.go:141] libmachine: (ha-044175-m03) DBG | I0805 23:12:27.854195   29643 retry.go:31] will retry after 2.248104101s: waiting for machine to come up
	I0805 23:12:30.103437   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:30.103846   28839 main.go:141] libmachine: (ha-044175-m03) DBG | unable to find current IP address of domain ha-044175-m03 in network mk-ha-044175
	I0805 23:12:30.103861   28839 main.go:141] libmachine: (ha-044175-m03) DBG | I0805 23:12:30.103817   29643 retry.go:31] will retry after 2.931156145s: waiting for machine to come up
	I0805 23:12:33.036613   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:33.037025   28839 main.go:141] libmachine: (ha-044175-m03) DBG | unable to find current IP address of domain ha-044175-m03 in network mk-ha-044175
	I0805 23:12:33.037049   28839 main.go:141] libmachine: (ha-044175-m03) DBG | I0805 23:12:33.036982   29643 retry.go:31] will retry after 4.276164676s: waiting for machine to come up
	I0805 23:12:37.314725   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:37.315250   28839 main.go:141] libmachine: (ha-044175-m03) Found IP for machine: 192.168.39.201
	I0805 23:12:37.315282   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has current primary IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
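(Editor's note: the block above is a poll-and-back-off loop. Libvirt has no DHCP lease yet for the new MAC 52:54:00:f4:37:04, so the driver keeps retrying with steadily longer, jittered delays until the lease for 192.168.39.201 appears about 21 seconds later. A generic sketch of that pattern; the lookupIP helper is hypothetical and stands in for the real lease query.)

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for querying libvirt's DHCP leases.
func lookupIP(domain string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP polls until the domain has an IP or the deadline passes,
// sleeping a growing, jittered interval between attempts.
func waitForIP(domain string, deadline time.Duration) (string, error) {
	start := time.Now()
	delay := 200 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay *= 2 // roughly matches the growing intervals in the log
	}
	return "", fmt.Errorf("timed out waiting for IP of %s", domain)
}

func main() {
	if ip, err := waitForIP("ha-044175-m03", 3*time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("found IP:", ip)
	}
}
```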
	I0805 23:12:37.315293   28839 main.go:141] libmachine: (ha-044175-m03) Reserving static IP address...
	I0805 23:12:37.315644   28839 main.go:141] libmachine: (ha-044175-m03) DBG | unable to find host DHCP lease matching {name: "ha-044175-m03", mac: "52:54:00:f4:37:04", ip: "192.168.39.201"} in network mk-ha-044175
	I0805 23:12:37.392179   28839 main.go:141] libmachine: (ha-044175-m03) DBG | Getting to WaitForSSH function...
	I0805 23:12:37.392213   28839 main.go:141] libmachine: (ha-044175-m03) Reserved static IP address: 192.168.39.201
	I0805 23:12:37.392225   28839 main.go:141] libmachine: (ha-044175-m03) Waiting for SSH to be available...
	I0805 23:12:37.395001   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:37.395500   28839 main.go:141] libmachine: (ha-044175-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175
	I0805 23:12:37.395530   28839 main.go:141] libmachine: (ha-044175-m03) DBG | unable to find defined IP address of network mk-ha-044175 interface with MAC address 52:54:00:f4:37:04
	I0805 23:12:37.395654   28839 main.go:141] libmachine: (ha-044175-m03) DBG | Using SSH client type: external
	I0805 23:12:37.395676   28839 main.go:141] libmachine: (ha-044175-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m03/id_rsa (-rw-------)
	I0805 23:12:37.395706   28839 main.go:141] libmachine: (ha-044175-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 23:12:37.395720   28839 main.go:141] libmachine: (ha-044175-m03) DBG | About to run SSH command:
	I0805 23:12:37.395738   28839 main.go:141] libmachine: (ha-044175-m03) DBG | exit 0
	I0805 23:12:37.399962   28839 main.go:141] libmachine: (ha-044175-m03) DBG | SSH cmd err, output: exit status 255: 
	I0805 23:12:37.399985   28839 main.go:141] libmachine: (ha-044175-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0805 23:12:37.399996   28839 main.go:141] libmachine: (ha-044175-m03) DBG | command : exit 0
	I0805 23:12:37.400003   28839 main.go:141] libmachine: (ha-044175-m03) DBG | err     : exit status 255
	I0805 23:12:37.400016   28839 main.go:141] libmachine: (ha-044175-m03) DBG | output  : 
	I0805 23:12:40.400584   28839 main.go:141] libmachine: (ha-044175-m03) DBG | Getting to WaitForSSH function...
	I0805 23:12:40.403127   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:40.403457   28839 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:12:40.403486   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:40.403644   28839 main.go:141] libmachine: (ha-044175-m03) DBG | Using SSH client type: external
	I0805 23:12:40.403670   28839 main.go:141] libmachine: (ha-044175-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m03/id_rsa (-rw-------)
	I0805 23:12:40.403700   28839 main.go:141] libmachine: (ha-044175-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.201 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 23:12:40.403711   28839 main.go:141] libmachine: (ha-044175-m03) DBG | About to run SSH command:
	I0805 23:12:40.403720   28839 main.go:141] libmachine: (ha-044175-m03) DBG | exit 0
	I0805 23:12:40.531190   28839 main.go:141] libmachine: (ha-044175-m03) DBG | SSH cmd err, output: <nil>: 
	I0805 23:12:40.531412   28839 main.go:141] libmachine: (ha-044175-m03) KVM machine creation complete!
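(Editor's note: WaitForSSH simply runs `exit 0` on the guest through an external ssh client. The first attempt at 23:12:37 fails with exit status 255 because the interface has no IP yet; the retry at 23:12:40 succeeds once the lease is in place. A sketch of that probe using a subset of the flags shown in the log; the key path is a placeholder.)

```go
package main

import (
	"fmt"
	"os/exec"
)

// sshReady returns true when a non-interactive `exit 0` over ssh succeeds.
func sshReady(ip, keyPath string) bool {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@" + ip,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	fmt.Println(sshReady("192.168.39.201", "/path/to/id_rsa"))
}
```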
	I0805 23:12:40.531711   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetConfigRaw
	I0805 23:12:40.532231   28839 main.go:141] libmachine: (ha-044175-m03) Calling .DriverName
	I0805 23:12:40.532423   28839 main.go:141] libmachine: (ha-044175-m03) Calling .DriverName
	I0805 23:12:40.532552   28839 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0805 23:12:40.532567   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetState
	I0805 23:12:40.533849   28839 main.go:141] libmachine: Detecting operating system of created instance...
	I0805 23:12:40.533868   28839 main.go:141] libmachine: Waiting for SSH to be available...
	I0805 23:12:40.533882   28839 main.go:141] libmachine: Getting to WaitForSSH function...
	I0805 23:12:40.533890   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHHostname
	I0805 23:12:40.536165   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:40.536495   28839 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:12:40.536523   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:40.536614   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHPort
	I0805 23:12:40.536821   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHKeyPath
	I0805 23:12:40.536963   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHKeyPath
	I0805 23:12:40.537111   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHUsername
	I0805 23:12:40.537274   28839 main.go:141] libmachine: Using SSH client type: native
	I0805 23:12:40.537507   28839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I0805 23:12:40.537518   28839 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0805 23:12:40.650728   28839 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 23:12:40.650757   28839 main.go:141] libmachine: Detecting the provisioner...
	I0805 23:12:40.650767   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHHostname
	I0805 23:12:40.653865   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:40.654319   28839 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:12:40.654350   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:40.654464   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHPort
	I0805 23:12:40.654709   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHKeyPath
	I0805 23:12:40.654910   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHKeyPath
	I0805 23:12:40.655085   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHUsername
	I0805 23:12:40.655267   28839 main.go:141] libmachine: Using SSH client type: native
	I0805 23:12:40.655468   28839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I0805 23:12:40.655484   28839 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0805 23:12:40.772114   28839 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0805 23:12:40.772237   28839 main.go:141] libmachine: found compatible host: buildroot
	I0805 23:12:40.772248   28839 main.go:141] libmachine: Provisioning with buildroot...
	I0805 23:12:40.772255   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetMachineName
	I0805 23:12:40.772507   28839 buildroot.go:166] provisioning hostname "ha-044175-m03"
	I0805 23:12:40.772535   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetMachineName
	I0805 23:12:40.772738   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHHostname
	I0805 23:12:40.775382   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:40.775748   28839 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:12:40.775776   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:40.776000   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHPort
	I0805 23:12:40.776189   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHKeyPath
	I0805 23:12:40.776350   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHKeyPath
	I0805 23:12:40.776492   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHUsername
	I0805 23:12:40.776662   28839 main.go:141] libmachine: Using SSH client type: native
	I0805 23:12:40.776820   28839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I0805 23:12:40.776832   28839 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-044175-m03 && echo "ha-044175-m03" | sudo tee /etc/hostname
	I0805 23:12:40.911933   28839 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-044175-m03
	
	I0805 23:12:40.911968   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHHostname
	I0805 23:12:40.914562   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:40.914922   28839 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:12:40.914947   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:40.915149   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHPort
	I0805 23:12:40.915309   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHKeyPath
	I0805 23:12:40.915474   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHKeyPath
	I0805 23:12:40.915606   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHUsername
	I0805 23:12:40.915749   28839 main.go:141] libmachine: Using SSH client type: native
	I0805 23:12:40.915943   28839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I0805 23:12:40.915961   28839 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-044175-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-044175-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-044175-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 23:12:41.040816   28839 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 23:12:41.040846   28839 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19373-9606/.minikube CaCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19373-9606/.minikube}
	I0805 23:12:41.040866   28839 buildroot.go:174] setting up certificates
	I0805 23:12:41.040880   28839 provision.go:84] configureAuth start
	I0805 23:12:41.040894   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetMachineName
	I0805 23:12:41.041154   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetIP
	I0805 23:12:41.043913   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:41.044351   28839 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:12:41.044378   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:41.044514   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHHostname
	I0805 23:12:41.046897   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:41.047336   28839 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:12:41.047357   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:41.047458   28839 provision.go:143] copyHostCerts
	I0805 23:12:41.047498   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem
	I0805 23:12:41.047539   28839 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem, removing ...
	I0805 23:12:41.047549   28839 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem
	I0805 23:12:41.047612   28839 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem (1679 bytes)
	I0805 23:12:41.047691   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem
	I0805 23:12:41.047709   28839 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem, removing ...
	I0805 23:12:41.047716   28839 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem
	I0805 23:12:41.047741   28839 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem (1082 bytes)
	I0805 23:12:41.047790   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem
	I0805 23:12:41.047812   28839 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem, removing ...
	I0805 23:12:41.047818   28839 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem
	I0805 23:12:41.047842   28839 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem (1123 bytes)
	I0805 23:12:41.047913   28839 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem org=jenkins.ha-044175-m03 san=[127.0.0.1 192.168.39.201 ha-044175-m03 localhost minikube]
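(Editor's note: the line above issues a per-machine server certificate signed by the local minikube CA, with the SANs listed: 127.0.0.1, 192.168.39.201, ha-044175-m03, localhost, minikube. A rough crypto/x509 sketch of issuing such a certificate, not minikube's implementation; it assumes an RSA PKCS#1 CA key and shortens the cert paths to the default .minikube location.)

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// mustPEM reads a PEM file and returns the DER bytes of its first block.
func mustPEM(path string) []byte {
	raw, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM data in " + path)
	}
	return block.Bytes
}

func main() {
	caCert, err := x509.ParseCertificate(mustPEM(os.ExpandEnv("$HOME/.minikube/certs/ca.pem")))
	if err != nil {
		panic(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustPEM(os.ExpandEnv("$HOME/.minikube/certs/ca-key.pem")))
	if err != nil {
		panic(err)
	}
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-044175-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.201")},
		DNSNames:    []string{"ha-044175-m03", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```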
	I0805 23:12:41.135263   28839 provision.go:177] copyRemoteCerts
	I0805 23:12:41.135319   28839 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 23:12:41.135343   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHHostname
	I0805 23:12:41.138088   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:41.138415   28839 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:12:41.138443   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:41.138639   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHPort
	I0805 23:12:41.138865   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHKeyPath
	I0805 23:12:41.139033   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHUsername
	I0805 23:12:41.139251   28839 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m03/id_rsa Username:docker}
	I0805 23:12:41.229814   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 23:12:41.229892   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0805 23:12:41.254889   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 23:12:41.254966   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 23:12:41.280662   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 23:12:41.280736   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 23:12:41.305856   28839 provision.go:87] duration metric: took 264.960326ms to configureAuth
	I0805 23:12:41.305887   28839 buildroot.go:189] setting minikube options for container-runtime
	I0805 23:12:41.306177   28839 config.go:182] Loaded profile config "ha-044175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:12:41.306280   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHHostname
	I0805 23:12:41.308968   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:41.309366   28839 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:12:41.309395   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:41.309569   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHPort
	I0805 23:12:41.309760   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHKeyPath
	I0805 23:12:41.309961   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHKeyPath
	I0805 23:12:41.310100   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHUsername
	I0805 23:12:41.310242   28839 main.go:141] libmachine: Using SSH client type: native
	I0805 23:12:41.310391   28839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I0805 23:12:41.310405   28839 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 23:12:41.592819   28839 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
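(Editor's note: the `%!s(MISSING)` tokens above are Go's fmt marker for a `%s` verb with no matching argument; they appear to be an artifact of how the command string was logged, since the command output echoed back by `tee` shows the intended CRIO_MINIKUBE_OPTIONS payload was written. A sketch, not minikube's code, of assembling that remote command.)

```go
package main

import "fmt"

// crioConfigCmd builds the remote command that writes
// /etc/sysconfig/crio.minikube and restarts CRI-O. Illustrative only.
func crioConfigCmd(insecureRegistry string) string {
	opts := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", insecureRegistry)
	return fmt.Sprintf(`sudo mkdir -p /etc/sysconfig && printf %%s "
%s" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`, opts)
}

func main() {
	fmt.Println(crioConfigCmd("10.96.0.0/12"))
}
```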
	I0805 23:12:41.592845   28839 main.go:141] libmachine: Checking connection to Docker...
	I0805 23:12:41.592856   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetURL
	I0805 23:12:41.594183   28839 main.go:141] libmachine: (ha-044175-m03) DBG | Using libvirt version 6000000
	I0805 23:12:41.596828   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:41.597298   28839 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:12:41.597325   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:41.597478   28839 main.go:141] libmachine: Docker is up and running!
	I0805 23:12:41.597490   28839 main.go:141] libmachine: Reticulating splines...
	I0805 23:12:41.597497   28839 client.go:171] duration metric: took 25.436714553s to LocalClient.Create
	I0805 23:12:41.597524   28839 start.go:167] duration metric: took 25.436787614s to libmachine.API.Create "ha-044175"
	I0805 23:12:41.597536   28839 start.go:293] postStartSetup for "ha-044175-m03" (driver="kvm2")
	I0805 23:12:41.597556   28839 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 23:12:41.597571   28839 main.go:141] libmachine: (ha-044175-m03) Calling .DriverName
	I0805 23:12:41.597828   28839 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 23:12:41.597853   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHHostname
	I0805 23:12:41.600379   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:41.600765   28839 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:12:41.600788   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:41.600950   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHPort
	I0805 23:12:41.601183   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHKeyPath
	I0805 23:12:41.601343   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHUsername
	I0805 23:12:41.601470   28839 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m03/id_rsa Username:docker}
	I0805 23:12:41.690542   28839 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 23:12:41.694912   28839 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 23:12:41.694939   28839 filesync.go:126] Scanning /home/jenkins/minikube-integration/19373-9606/.minikube/addons for local assets ...
	I0805 23:12:41.695008   28839 filesync.go:126] Scanning /home/jenkins/minikube-integration/19373-9606/.minikube/files for local assets ...
	I0805 23:12:41.695114   28839 filesync.go:149] local asset: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem -> 167922.pem in /etc/ssl/certs
	I0805 23:12:41.695129   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem -> /etc/ssl/certs/167922.pem
	I0805 23:12:41.695242   28839 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 23:12:41.705540   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem --> /etc/ssl/certs/167922.pem (1708 bytes)
	I0805 23:12:41.733699   28839 start.go:296] duration metric: took 136.142198ms for postStartSetup
	I0805 23:12:41.733756   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetConfigRaw
	I0805 23:12:41.734474   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetIP
	I0805 23:12:41.737105   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:41.737508   28839 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:12:41.737530   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:41.737826   28839 profile.go:143] Saving config to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/config.json ...
	I0805 23:12:41.738043   28839 start.go:128] duration metric: took 25.597496393s to createHost
	I0805 23:12:41.738069   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHHostname
	I0805 23:12:41.740252   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:41.740581   28839 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:12:41.740606   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:41.740704   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHPort
	I0805 23:12:41.740906   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHKeyPath
	I0805 23:12:41.741078   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHKeyPath
	I0805 23:12:41.741217   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHUsername
	I0805 23:12:41.741374   28839 main.go:141] libmachine: Using SSH client type: native
	I0805 23:12:41.741544   28839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I0805 23:12:41.741557   28839 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 23:12:41.855935   28839 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722899561.810527968
	
	I0805 23:12:41.855959   28839 fix.go:216] guest clock: 1722899561.810527968
	I0805 23:12:41.855970   28839 fix.go:229] Guest: 2024-08-05 23:12:41.810527968 +0000 UTC Remote: 2024-08-05 23:12:41.73805629 +0000 UTC m=+161.054044407 (delta=72.471678ms)
	I0805 23:12:41.855989   28839 fix.go:200] guest clock delta is within tolerance: 72.471678ms
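(Editor's note: the fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the machine when the delta is small, here about 72.47ms. A simplified sketch of that comparison using the values from the log; the one-second tolerance is an assumption, and float parsing drops sub-microsecond precision.)

```go
package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns how far
// the guest clock is from the supplied host-side timestamp.
func clockDelta(guestOutput string, remote time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOutput, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(remote), nil
}

func main() {
	const tolerance = time.Second // assumed tolerance for illustration
	// Guest and remote values taken from the log above.
	delta, err := clockDelta("1722899561.810527968",
		time.Date(2024, 8, 5, 23, 12, 41, 738056290, time.UTC))
	if err != nil {
		panic(err)
	}
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta < tolerance)
}
```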
	I0805 23:12:41.855996   28839 start.go:83] releasing machines lock for "ha-044175-m03", held for 25.715587212s
	I0805 23:12:41.856020   28839 main.go:141] libmachine: (ha-044175-m03) Calling .DriverName
	I0805 23:12:41.856341   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetIP
	I0805 23:12:41.859354   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:41.859743   28839 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:12:41.859771   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:41.861941   28839 out.go:177] * Found network options:
	I0805 23:12:41.863319   28839 out.go:177]   - NO_PROXY=192.168.39.57,192.168.39.112
	W0805 23:12:41.864893   28839 proxy.go:119] fail to check proxy env: Error ip not in block
	W0805 23:12:41.864921   28839 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 23:12:41.864938   28839 main.go:141] libmachine: (ha-044175-m03) Calling .DriverName
	I0805 23:12:41.865418   28839 main.go:141] libmachine: (ha-044175-m03) Calling .DriverName
	I0805 23:12:41.865628   28839 main.go:141] libmachine: (ha-044175-m03) Calling .DriverName
	I0805 23:12:41.865738   28839 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 23:12:41.865802   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHHostname
	W0805 23:12:41.865800   28839 proxy.go:119] fail to check proxy env: Error ip not in block
	W0805 23:12:41.865846   28839 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 23:12:41.865945   28839 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 23:12:41.865967   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHHostname
	I0805 23:12:41.868825   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:41.868845   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:41.869287   28839 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:12:41.869313   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:41.869337   28839 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:12:41.869355   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:41.869435   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHPort
	I0805 23:12:41.869563   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHPort
	I0805 23:12:41.869643   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHKeyPath
	I0805 23:12:41.869711   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHKeyPath
	I0805 23:12:41.869772   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHUsername
	I0805 23:12:41.869836   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHUsername
	I0805 23:12:41.869893   28839 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m03/id_rsa Username:docker}
	I0805 23:12:41.869990   28839 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m03/id_rsa Username:docker}
	I0805 23:12:42.115392   28839 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 23:12:42.121242   28839 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 23:12:42.121300   28839 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 23:12:42.138419   28839 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 23:12:42.138445   28839 start.go:495] detecting cgroup driver to use...
	I0805 23:12:42.138512   28839 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 23:12:42.154940   28839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 23:12:42.171891   28839 docker.go:217] disabling cri-docker service (if available) ...
	I0805 23:12:42.171955   28839 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 23:12:42.187452   28839 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 23:12:42.203635   28839 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 23:12:42.331363   28839 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 23:12:42.486729   28839 docker.go:233] disabling docker service ...
	I0805 23:12:42.486813   28839 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 23:12:42.502563   28839 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 23:12:42.516833   28839 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 23:12:42.653003   28839 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 23:12:42.782159   28839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 23:12:42.797842   28839 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 23:12:42.816825   28839 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0805 23:12:42.816891   28839 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:12:42.827670   28839 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 23:12:42.827745   28839 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:12:42.838303   28839 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:12:42.849311   28839 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:12:42.860901   28839 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 23:12:42.871683   28839 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:12:42.883404   28839 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:12:42.903914   28839 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:12:42.914926   28839 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 23:12:42.924481   28839 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 23:12:42.924551   28839 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 23:12:42.937387   28839 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
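
The three commands just above are the usual prerequisite check for bridged CNI traffic: if the net.bridge.bridge-nf-call-iptables sysctl cannot be read, the br_netfilter module is loaded and IPv4 forwarding is enabled. A rough Go sketch of that fallback (command strings taken from the log; error handling simplified, and non-interactive sudo is assumed):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Probe the bridge netfilter sysctl; failure usually means br_netfilter is not loaded.
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("couldn't verify netfilter, loading br_netfilter:", err)
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			fmt.Println("modprobe br_netfilter failed:", err)
		}
	}
	// Enable IPv4 forwarding, as in the log above.
	if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
		fmt.Println("enabling ip_forward failed:", err)
	}
}
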
	I0805 23:12:42.947466   28839 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 23:12:43.065481   28839 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 23:12:43.220596   28839 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 23:12:43.220678   28839 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 23:12:43.225780   28839 start.go:563] Will wait 60s for crictl version
	I0805 23:12:43.225839   28839 ssh_runner.go:195] Run: which crictl
	I0805 23:12:43.229784   28839 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 23:12:43.273939   28839 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 23:12:43.274031   28839 ssh_runner.go:195] Run: crio --version
	I0805 23:12:43.306047   28839 ssh_runner.go:195] Run: crio --version
	I0805 23:12:43.338481   28839 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0805 23:12:43.340246   28839 out.go:177]   - env NO_PROXY=192.168.39.57
	I0805 23:12:43.341615   28839 out.go:177]   - env NO_PROXY=192.168.39.57,192.168.39.112
	I0805 23:12:43.343026   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetIP
	I0805 23:12:43.346432   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:43.346881   28839 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:12:43.346908   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:43.347212   28839 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0805 23:12:43.351889   28839 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 23:12:43.365732   28839 mustload.go:65] Loading cluster: ha-044175
	I0805 23:12:43.365972   28839 config.go:182] Loaded profile config "ha-044175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:12:43.366273   28839 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:12:43.366316   28839 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:12:43.380599   28839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35391
	I0805 23:12:43.381039   28839 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:12:43.381493   28839 main.go:141] libmachine: Using API Version  1
	I0805 23:12:43.381519   28839 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:12:43.381916   28839 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:12:43.382176   28839 main.go:141] libmachine: (ha-044175) Calling .GetState
	I0805 23:12:43.383977   28839 host.go:66] Checking if "ha-044175" exists ...
	I0805 23:12:43.384352   28839 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:12:43.384402   28839 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:12:43.399465   28839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32885
	I0805 23:12:43.399903   28839 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:12:43.400360   28839 main.go:141] libmachine: Using API Version  1
	I0805 23:12:43.400384   28839 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:12:43.400658   28839 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:12:43.400818   28839 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:12:43.400963   28839 certs.go:68] Setting up /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175 for IP: 192.168.39.201
	I0805 23:12:43.400972   28839 certs.go:194] generating shared ca certs ...
	I0805 23:12:43.400984   28839 certs.go:226] acquiring lock for ca certs: {Name:mkf35a042c1656d191f542eee7fa087aad4d29d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 23:12:43.401114   28839 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key
	I0805 23:12:43.401169   28839 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key
	I0805 23:12:43.401182   28839 certs.go:256] generating profile certs ...
	I0805 23:12:43.401266   28839 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/client.key
	I0805 23:12:43.401298   28839 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key.ce298ff1
	I0805 23:12:43.401313   28839 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt.ce298ff1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.57 192.168.39.112 192.168.39.201 192.168.39.254]
	I0805 23:12:43.614914   28839 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt.ce298ff1 ...
	I0805 23:12:43.614942   28839 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt.ce298ff1: {Name:mkb3dfb2f5fd0b26a6a36cb6f006f2202db1b3f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 23:12:43.615116   28839 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key.ce298ff1 ...
	I0805 23:12:43.615132   28839 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key.ce298ff1: {Name:mkc9fb59d0e5374772bfc7d4f2f4f67d3ffc06b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 23:12:43.615210   28839 certs.go:381] copying /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt.ce298ff1 -> /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt
	I0805 23:12:43.615329   28839 certs.go:385] copying /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key.ce298ff1 -> /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key
	I0805 23:12:43.615451   28839 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.key
	I0805 23:12:43.615465   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0805 23:12:43.615478   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0805 23:12:43.615491   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0805 23:12:43.615504   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0805 23:12:43.615516   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0805 23:12:43.615529   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0805 23:12:43.615541   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0805 23:12:43.615553   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0805 23:12:43.615605   28839 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792.pem (1338 bytes)
	W0805 23:12:43.615631   28839 certs.go:480] ignoring /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792_empty.pem, impossibly tiny 0 bytes
	I0805 23:12:43.615640   28839 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 23:12:43.615662   28839 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem (1082 bytes)
	I0805 23:12:43.615682   28839 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem (1123 bytes)
	I0805 23:12:43.615702   28839 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem (1679 bytes)
	I0805 23:12:43.615737   28839 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem (1708 bytes)
	I0805 23:12:43.615761   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792.pem -> /usr/share/ca-certificates/16792.pem
	I0805 23:12:43.615774   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem -> /usr/share/ca-certificates/167922.pem
	I0805 23:12:43.615788   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
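
The certs.go/crypto.go lines above amount to issuing an apiserver serving certificate for the new control-plane node, signed by the shared minikube CA, with the full set of cluster IPs from the crypto.go line (including the node IPs and the HA VIP 192.168.39.254) as SANs. A self-contained sketch of that kind of SAN certificate issuance with crypto/x509; the file paths, RSA/PKCS#1 key format and validity period are assumptions, and this is not minikube's own helper code:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load the existing CA pair (placeholder paths; assumes an RSA PKCS#1 key).
	caCertPEM, err := os.ReadFile("ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	caKeyPEM, err := os.ReadFile("ca.key")
	if err != nil {
		log.Fatal(err)
	}
	caBlock, _ := pem.Decode(caCertPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	// Fresh key pair for the apiserver serving certificate.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0), // assumed validity period
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The IP SANs listed in the crypto.go line above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.57"), net.ParseIP("192.168.39.112"),
			net.ParseIP("192.168.39.201"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	// PEM-encode the signed certificate (the key would be written out separately).
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
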
	I0805 23:12:43.615821   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:12:43.618655   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:12:43.619118   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:12:43.619144   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:12:43.619328   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:12:43.619495   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:12:43.619657   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:12:43.619755   28839 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/id_rsa Username:docker}
	I0805 23:12:43.691394   28839 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0805 23:12:43.697185   28839 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0805 23:12:43.709581   28839 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0805 23:12:43.714326   28839 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0805 23:12:43.726125   28839 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0805 23:12:43.730719   28839 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0805 23:12:43.749305   28839 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0805 23:12:43.753726   28839 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0805 23:12:43.764044   28839 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0805 23:12:43.768596   28839 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0805 23:12:43.781828   28839 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0805 23:12:43.787289   28839 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0805 23:12:43.803290   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 23:12:43.831625   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0805 23:12:43.857350   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 23:12:43.881128   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 23:12:43.905585   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0805 23:12:43.929656   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0805 23:12:43.955804   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 23:12:43.979926   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0805 23:12:44.004002   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792.pem --> /usr/share/ca-certificates/16792.pem (1338 bytes)
	I0805 23:12:44.029378   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem --> /usr/share/ca-certificates/167922.pem (1708 bytes)
	I0805 23:12:44.058121   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 23:12:44.082413   28839 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0805 23:12:44.100775   28839 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0805 23:12:44.119482   28839 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0805 23:12:44.136137   28839 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0805 23:12:44.153074   28839 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0805 23:12:44.170347   28839 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0805 23:12:44.187745   28839 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0805 23:12:44.205325   28839 ssh_runner.go:195] Run: openssl version
	I0805 23:12:44.211566   28839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 23:12:44.224098   28839 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 23:12:44.228763   28839 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0805 23:12:44.228825   28839 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 23:12:44.234887   28839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 23:12:44.246481   28839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16792.pem && ln -fs /usr/share/ca-certificates/16792.pem /etc/ssl/certs/16792.pem"
	I0805 23:12:44.257667   28839 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16792.pem
	I0805 23:12:44.262354   28839 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 23:03 /usr/share/ca-certificates/16792.pem
	I0805 23:12:44.262415   28839 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16792.pem
	I0805 23:12:44.268104   28839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16792.pem /etc/ssl/certs/51391683.0"
	I0805 23:12:44.279023   28839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167922.pem && ln -fs /usr/share/ca-certificates/167922.pem /etc/ssl/certs/167922.pem"
	I0805 23:12:44.290198   28839 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167922.pem
	I0805 23:12:44.294735   28839 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 23:03 /usr/share/ca-certificates/167922.pem
	I0805 23:12:44.294797   28839 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167922.pem
	I0805 23:12:44.300670   28839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167922.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 23:12:44.311822   28839 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 23:12:44.316292   28839 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0805 23:12:44.316337   28839 kubeadm.go:934] updating node {m03 192.168.39.201 8443 v1.30.3 crio true true} ...
	I0805 23:12:44.316414   28839 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-044175-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.201
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-044175 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 23:12:44.316438   28839 kube-vip.go:115] generating kube-vip config ...
	I0805 23:12:44.316471   28839 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0805 23:12:44.334138   28839 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0805 23:12:44.334214   28839 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
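
kube-vip.go above renders this static-pod manifest from a template before it is copied to /etc/kubernetes/manifests (the scp of kube-vip.yaml a few lines below). A stripped-down sketch of that render step with text/template; the struct fields and the abbreviated template are illustrative, not minikube's actual types:

package main

import (
	"log"
	"os"
	"text/template"
)

// Illustrative parameters; minikube's real config carries many more fields.
type kubeVipParams struct {
	VIP   string
	Port  string
	Image string
	LBOn  bool
}

// Heavily abbreviated template covering only a few of the entries shown above.
const manifestTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args: ["manager"]
    env:
    - name: port
      value: "{{ .Port }}"
    - name: address
      value: {{ .VIP }}
{{- if .LBOn }}
    - name: lb_enable
      value: "true"
    - name: lb_port
      value: "{{ .Port }}"
{{- end }}
    image: {{ .Image }}
    name: kube-vip
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifestTmpl))
	params := kubeVipParams{
		VIP:   "192.168.39.254",
		Port:  "8443",
		Image: "ghcr.io/kube-vip/kube-vip:v0.8.0",
		LBOn:  true,
	}
	if err := t.Execute(os.Stdout, params); err != nil {
		log.Fatal(err)
	}
}
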
	I0805 23:12:44.334273   28839 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 23:12:44.344314   28839 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0805 23:12:44.344379   28839 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0805 23:12:44.354301   28839 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0805 23:12:44.354334   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0805 23:12:44.354334   28839 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0805 23:12:44.354351   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0805 23:12:44.354421   28839 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0805 23:12:44.354426   28839 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0805 23:12:44.354306   28839 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0805 23:12:44.354475   28839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 23:12:44.370943   28839 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0805 23:12:44.370977   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0805 23:12:44.370990   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0805 23:12:44.371016   28839 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0805 23:12:44.371059   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0805 23:12:44.371086   28839 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0805 23:12:44.407258   28839 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0805 23:12:44.407302   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0805 23:12:45.341946   28839 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0805 23:12:45.351916   28839 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0805 23:12:45.369055   28839 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 23:12:45.387580   28839 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0805 23:12:45.406382   28839 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0805 23:12:45.410465   28839 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
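
The one-liner above is the usual replace-or-append idiom for /etc/hosts: filter out any existing control-plane.minikube.internal mapping, append the desired line, and copy the result back into place. The same logic in plain Go, written to a scratch file instead of /etc/hosts since this is only a sketch:

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.39.254\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}

	// Keep every line except an existing control-plane.minikube.internal mapping.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)

	// The logged command copies its temp file over /etc/hosts with sudo cp; here we
	// only write a scratch copy.
	if err := os.WriteFile("/tmp/hosts.updated", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}
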
	I0805 23:12:45.424113   28839 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 23:12:45.551578   28839 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 23:12:45.569837   28839 host.go:66] Checking if "ha-044175" exists ...
	I0805 23:12:45.570306   28839 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:12:45.570365   28839 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:12:45.587123   28839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42951
	I0805 23:12:45.587619   28839 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:12:45.588193   28839 main.go:141] libmachine: Using API Version  1
	I0805 23:12:45.588219   28839 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:12:45.588651   28839 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:12:45.588877   28839 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:12:45.589052   28839 start.go:317] joinCluster: &{Name:ha-044175 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-044175 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.112 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.201 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 23:12:45.589217   28839 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0805 23:12:45.589235   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:12:45.592685   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:12:45.593174   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:12:45.593202   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:12:45.593368   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:12:45.593539   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:12:45.593687   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:12:45.593824   28839 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/id_rsa Username:docker}
	I0805 23:12:45.759535   28839 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.201 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 23:12:45.759586   28839 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lo1e8m.q8g3oakxetdywsfy --discovery-token-ca-cert-hash sha256:80c3f4a7caafd825f47d5f536053424d1d775e8da247cc5594b6b717e711fcd3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-044175-m03 --control-plane --apiserver-advertise-address=192.168.39.201 --apiserver-bind-port=8443"
	I0805 23:13:09.340790   28839 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lo1e8m.q8g3oakxetdywsfy --discovery-token-ca-cert-hash sha256:80c3f4a7caafd825f47d5f536053424d1d775e8da247cc5594b6b717e711fcd3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-044175-m03 --control-plane --apiserver-advertise-address=192.168.39.201 --apiserver-bind-port=8443": (23.581172917s)
	I0805 23:13:09.340833   28839 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0805 23:13:09.922208   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-044175-m03 minikube.k8s.io/updated_at=2024_08_05T23_13_09_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4 minikube.k8s.io/name=ha-044175 minikube.k8s.io/primary=false
	I0805 23:13:10.105507   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-044175-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0805 23:13:10.255386   28839 start.go:319] duration metric: took 24.666330259s to joinCluster
	I0805 23:13:10.255462   28839 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.201 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 23:13:10.255896   28839 config.go:182] Loaded profile config "ha-044175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:13:10.257355   28839 out.go:177] * Verifying Kubernetes components...
	I0805 23:13:10.258896   28839 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 23:13:10.552807   28839 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 23:13:10.578730   28839 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19373-9606/kubeconfig
	I0805 23:13:10.579104   28839 kapi.go:59] client config for ha-044175: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/client.crt", KeyFile:"/home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/client.key", CAFile:"/home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02940), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0805 23:13:10.579208   28839 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.57:8443
	I0805 23:13:10.579501   28839 node_ready.go:35] waiting up to 6m0s for node "ha-044175-m03" to be "Ready" ...
	I0805 23:13:10.579611   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:10.579623   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:10.579635   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:10.579644   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:10.583792   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:13:11.079657   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:11.079680   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:11.079689   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:11.079693   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:11.083673   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:11.579982   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:11.580007   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:11.580020   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:11.580026   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:11.584352   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:13:12.080378   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:12.080406   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:12.080419   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:12.080424   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:12.084588   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:13:12.580345   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:12.580365   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:12.580375   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:12.580382   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:12.584471   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:13:12.585148   28839 node_ready.go:53] node "ha-044175-m03" has status "Ready":"False"
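
The node_ready.go / round_trippers.go traffic above and below is a simple poll loop: re-fetch the Node object roughly every half second and check its Ready condition, giving up after the 6m0s budget noted earlier. An equivalent loop written directly against client-go (the kubeconfig path is a placeholder; minikube's internal kapi client wrapper is not reproduced here):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from a kubeconfig (placeholder path).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s wait in the log
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-044175-m03", metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					fmt.Println("node ha-044175-m03 is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // the log polls at roughly this interval
	}
	fmt.Println("timed out waiting for ha-044175-m03 to become Ready")
}
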
	I0805 23:13:13.080341   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:13.080360   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:13.080370   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:13.080375   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:13.083973   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:13.579798   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:13.579829   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:13.579840   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:13.579847   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:13.583870   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:13:14.080325   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:14.080352   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:14.080363   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:14.080369   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:14.084824   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:13:14.579749   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:14.579772   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:14.579783   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:14.579791   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:14.583727   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:15.080276   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:15.080302   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:15.080312   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:15.080317   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:15.084036   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:15.084802   28839 node_ready.go:53] node "ha-044175-m03" has status "Ready":"False"
	I0805 23:13:15.580083   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:15.580116   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:15.580123   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:15.580128   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:15.584100   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:16.080115   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:16.080141   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:16.080154   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:16.080159   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:16.083872   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:16.580494   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:16.580513   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:16.580521   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:16.580525   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:16.585175   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:13:17.080287   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:17.080310   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:17.080322   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:17.080327   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:17.085307   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:13:17.086077   28839 node_ready.go:53] node "ha-044175-m03" has status "Ready":"False"
	I0805 23:13:17.580296   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:17.580320   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:17.580329   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:17.580333   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:17.584316   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:18.079775   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:18.079799   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:18.079811   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:18.079815   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:18.084252   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:13:18.580643   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:18.580673   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:18.580684   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:18.580689   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:18.584405   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:19.080269   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:19.080290   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:19.080299   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:19.080303   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:19.084831   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:13:19.579856   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:19.579892   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:19.579902   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:19.579907   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:19.583303   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:19.584078   28839 node_ready.go:53] node "ha-044175-m03" has status "Ready":"False"
	I0805 23:13:20.079828   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:20.079864   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:20.079889   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:20.079894   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:20.084108   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:13:20.579920   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:20.579943   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:20.579951   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:20.579957   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:20.583983   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:13:21.080098   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:21.080119   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:21.080127   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:21.080131   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:21.084094   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:21.580237   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:21.580259   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:21.580271   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:21.580277   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:21.583732   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:21.584540   28839 node_ready.go:53] node "ha-044175-m03" has status "Ready":"False"
	I0805 23:13:22.080311   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:22.080333   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:22.080341   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:22.080346   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:22.083883   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:22.579900   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:22.579923   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:22.579932   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:22.579937   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:22.583407   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:23.080584   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:23.080607   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:23.080619   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:23.080626   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:23.084528   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:23.579780   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:23.579799   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:23.579807   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:23.579810   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:23.583231   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:24.080315   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:24.080341   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:24.080352   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:24.080358   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:24.083819   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:24.084684   28839 node_ready.go:53] node "ha-044175-m03" has status "Ready":"False"
	I0805 23:13:24.579982   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:24.580012   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:24.580021   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:24.580024   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:24.583939   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:25.080552   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:25.080578   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:25.080589   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:25.080595   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:25.084110   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:25.580012   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:25.580034   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:25.580042   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:25.580047   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:25.583938   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:26.080137   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:26.080160   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:26.080168   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:26.080172   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:26.083665   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:26.580527   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:26.580549   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:26.580562   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:26.580570   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:26.584322   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:26.584974   28839 node_ready.go:53] node "ha-044175-m03" has status "Ready":"False"
	I0805 23:13:27.080291   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:27.080312   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:27.080320   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:27.080324   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:27.083751   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:27.579892   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:27.579936   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:27.579947   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:27.579955   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:27.583568   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:28.079763   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:28.079787   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:28.079799   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:28.079807   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:28.083156   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:28.580304   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:28.580336   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:28.580347   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:28.580353   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:28.583927   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:29.080135   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:29.080165   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:29.080177   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:29.080181   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:29.083856   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:29.084632   28839 node_ready.go:49] node "ha-044175-m03" has status "Ready":"True"
	I0805 23:13:29.084661   28839 node_ready.go:38] duration metric: took 18.505139296s for node "ha-044175-m03" to be "Ready" ...
	I0805 23:13:29.084670   28839 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 23:13:29.084724   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods
	I0805 23:13:29.084733   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:29.084740   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:29.084744   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:29.092435   28839 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0805 23:13:29.099556   28839 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-g9bml" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:29.099630   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-g9bml
	I0805 23:13:29.099636   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:29.099643   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:29.099649   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:29.102622   28839 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 23:13:29.103478   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175
	I0805 23:13:29.103497   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:29.103507   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:29.103513   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:29.106161   28839 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 23:13:29.106706   28839 pod_ready.go:92] pod "coredns-7db6d8ff4d-g9bml" in "kube-system" namespace has status "Ready":"True"
	I0805 23:13:29.106722   28839 pod_ready.go:81] duration metric: took 7.143366ms for pod "coredns-7db6d8ff4d-g9bml" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:29.106731   28839 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vzhst" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:29.106779   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vzhst
	I0805 23:13:29.106786   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:29.106793   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:29.106798   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:29.109584   28839 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 23:13:29.110191   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175
	I0805 23:13:29.110204   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:29.110210   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:29.110214   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:29.112631   28839 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 23:13:29.113209   28839 pod_ready.go:92] pod "coredns-7db6d8ff4d-vzhst" in "kube-system" namespace has status "Ready":"True"
	I0805 23:13:29.113224   28839 pod_ready.go:81] duration metric: took 6.487633ms for pod "coredns-7db6d8ff4d-vzhst" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:29.113232   28839 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-044175" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:29.113318   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/etcd-ha-044175
	I0805 23:13:29.113328   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:29.113334   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:29.113339   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:29.115566   28839 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 23:13:29.116073   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175
	I0805 23:13:29.116091   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:29.116100   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:29.116107   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:29.118160   28839 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 23:13:29.118547   28839 pod_ready.go:92] pod "etcd-ha-044175" in "kube-system" namespace has status "Ready":"True"
	I0805 23:13:29.118562   28839 pod_ready.go:81] duration metric: took 5.324674ms for pod "etcd-ha-044175" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:29.118569   28839 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-044175-m02" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:29.118616   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/etcd-ha-044175-m02
	I0805 23:13:29.118624   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:29.118630   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:29.118635   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:29.120704   28839 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 23:13:29.121217   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:13:29.121229   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:29.121238   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:29.121245   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:29.123792   28839 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 23:13:29.124416   28839 pod_ready.go:92] pod "etcd-ha-044175-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 23:13:29.124436   28839 pod_ready.go:81] duration metric: took 5.859943ms for pod "etcd-ha-044175-m02" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:29.124446   28839 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-044175-m03" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:29.280814   28839 request.go:629] Waited for 156.310918ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/etcd-ha-044175-m03
	I0805 23:13:29.280906   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/etcd-ha-044175-m03
	I0805 23:13:29.280914   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:29.280929   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:29.280937   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:29.284543   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:29.480962   28839 request.go:629] Waited for 195.348486ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:29.481052   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:29.481063   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:29.481073   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:29.481080   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:29.484820   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:29.485975   28839 pod_ready.go:92] pod "etcd-ha-044175-m03" in "kube-system" namespace has status "Ready":"True"
	I0805 23:13:29.485999   28839 pod_ready.go:81] duration metric: took 361.54109ms for pod "etcd-ha-044175-m03" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:29.486022   28839 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-044175" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:29.680405   28839 request.go:629] Waited for 194.309033ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-044175
	I0805 23:13:29.680483   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-044175
	I0805 23:13:29.680492   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:29.680500   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:29.680504   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:29.683658   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:29.880892   28839 request.go:629] Waited for 196.365769ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes/ha-044175
	I0805 23:13:29.880954   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175
	I0805 23:13:29.880959   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:29.880966   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:29.880970   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:29.884441   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:29.885042   28839 pod_ready.go:92] pod "kube-apiserver-ha-044175" in "kube-system" namespace has status "Ready":"True"
	I0805 23:13:29.885058   28839 pod_ready.go:81] duration metric: took 399.024942ms for pod "kube-apiserver-ha-044175" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:29.885068   28839 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-044175-m02" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:30.080128   28839 request.go:629] Waited for 194.999097ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-044175-m02
	I0805 23:13:30.080227   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-044175-m02
	I0805 23:13:30.080238   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:30.080250   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:30.080257   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:30.083834   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:30.281049   28839 request.go:629] Waited for 196.344278ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:13:30.281138   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:13:30.281144   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:30.281152   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:30.281158   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:30.284838   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:30.285672   28839 pod_ready.go:92] pod "kube-apiserver-ha-044175-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 23:13:30.285699   28839 pod_ready.go:81] duration metric: took 400.624511ms for pod "kube-apiserver-ha-044175-m02" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:30.285730   28839 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-044175-m03" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:30.480762   28839 request.go:629] Waited for 194.951381ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-044175-m03
	I0805 23:13:30.480873   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-044175-m03
	I0805 23:13:30.480888   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:30.480898   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:30.480904   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:30.484624   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:30.680546   28839 request.go:629] Waited for 195.355261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:30.680624   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:30.680635   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:30.680649   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:30.680658   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:30.684292   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:30.685008   28839 pod_ready.go:92] pod "kube-apiserver-ha-044175-m03" in "kube-system" namespace has status "Ready":"True"
	I0805 23:13:30.685029   28839 pod_ready.go:81] duration metric: took 399.28781ms for pod "kube-apiserver-ha-044175-m03" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:30.685040   28839 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-044175" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:30.880841   28839 request.go:629] Waited for 195.731489ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-044175
	I0805 23:13:30.880902   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-044175
	I0805 23:13:30.880907   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:30.880914   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:30.880918   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:30.884894   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:31.080949   28839 request.go:629] Waited for 195.363946ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes/ha-044175
	I0805 23:13:31.081024   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175
	I0805 23:13:31.081029   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:31.081036   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:31.081042   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:31.084929   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:31.086329   28839 pod_ready.go:92] pod "kube-controller-manager-ha-044175" in "kube-system" namespace has status "Ready":"True"
	I0805 23:13:31.086354   28839 pod_ready.go:81] duration metric: took 401.306409ms for pod "kube-controller-manager-ha-044175" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:31.086365   28839 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-044175-m02" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:31.280695   28839 request.go:629] Waited for 194.261394ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-044175-m02
	I0805 23:13:31.280765   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-044175-m02
	I0805 23:13:31.280773   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:31.280783   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:31.280789   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:31.284270   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:31.480709   28839 request.go:629] Waited for 195.366262ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:13:31.480767   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:13:31.480783   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:31.480791   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:31.480799   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:31.484380   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:31.484963   28839 pod_ready.go:92] pod "kube-controller-manager-ha-044175-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 23:13:31.484983   28839 pod_ready.go:81] duration metric: took 398.611698ms for pod "kube-controller-manager-ha-044175-m02" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:31.484996   28839 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-044175-m03" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:31.680790   28839 request.go:629] Waited for 195.72273ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-044175-m03
	I0805 23:13:31.680880   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-044175-m03
	I0805 23:13:31.680888   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:31.680896   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:31.680900   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:31.684619   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:31.881003   28839 request.go:629] Waited for 195.355315ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:31.881055   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:31.881060   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:31.881070   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:31.881076   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:31.884305   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:31.884834   28839 pod_ready.go:92] pod "kube-controller-manager-ha-044175-m03" in "kube-system" namespace has status "Ready":"True"
	I0805 23:13:31.884856   28839 pod_ready.go:81] duration metric: took 399.851377ms for pod "kube-controller-manager-ha-044175-m03" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:31.884869   28839 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4ql5l" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:32.081052   28839 request.go:629] Waited for 196.099083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4ql5l
	I0805 23:13:32.081124   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4ql5l
	I0805 23:13:32.081133   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:32.081143   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:32.081152   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:32.084619   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:32.280904   28839 request.go:629] Waited for 195.368598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:32.280957   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:32.280967   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:32.280986   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:32.280993   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:32.284909   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:32.285660   28839 pod_ready.go:92] pod "kube-proxy-4ql5l" in "kube-system" namespace has status "Ready":"True"
	I0805 23:13:32.285682   28839 pod_ready.go:81] duration metric: took 400.797372ms for pod "kube-proxy-4ql5l" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:32.285696   28839 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jfs9q" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:32.480848   28839 request.go:629] Waited for 195.083319ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jfs9q
	I0805 23:13:32.480926   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jfs9q
	I0805 23:13:32.480935   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:32.480944   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:32.481009   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:32.484539   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:32.680586   28839 request.go:629] Waited for 195.338964ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:13:32.680659   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:13:32.680667   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:32.680678   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:32.680683   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:32.684223   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:32.684937   28839 pod_ready.go:92] pod "kube-proxy-jfs9q" in "kube-system" namespace has status "Ready":"True"
	I0805 23:13:32.684957   28839 pod_ready.go:81] duration metric: took 399.252196ms for pod "kube-proxy-jfs9q" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:32.684972   28839 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vj5sd" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:32.881019   28839 request.go:629] Waited for 195.972084ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vj5sd
	I0805 23:13:32.881108   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vj5sd
	I0805 23:13:32.881119   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:32.881130   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:32.881140   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:32.884904   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:33.080296   28839 request.go:629] Waited for 194.285753ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes/ha-044175
	I0805 23:13:33.080383   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175
	I0805 23:13:33.080389   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:33.080397   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:33.080404   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:33.083997   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:33.084894   28839 pod_ready.go:92] pod "kube-proxy-vj5sd" in "kube-system" namespace has status "Ready":"True"
	I0805 23:13:33.084914   28839 pod_ready.go:81] duration metric: took 399.929086ms for pod "kube-proxy-vj5sd" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:33.084923   28839 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-044175" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:33.281061   28839 request.go:629] Waited for 196.079497ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-044175
	I0805 23:13:33.281153   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-044175
	I0805 23:13:33.281161   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:33.281170   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:33.281175   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:33.284680   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:33.480553   28839 request.go:629] Waited for 195.084005ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes/ha-044175
	I0805 23:13:33.480611   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175
	I0805 23:13:33.480616   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:33.480624   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:33.480628   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:33.484136   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:33.484921   28839 pod_ready.go:92] pod "kube-scheduler-ha-044175" in "kube-system" namespace has status "Ready":"True"
	I0805 23:13:33.484940   28839 pod_ready.go:81] duration metric: took 400.010367ms for pod "kube-scheduler-ha-044175" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:33.484952   28839 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-044175-m02" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:33.681072   28839 request.go:629] Waited for 196.05614ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-044175-m02
	I0805 23:13:33.681148   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-044175-m02
	I0805 23:13:33.681155   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:33.681166   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:33.681173   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:33.684559   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:33.880566   28839 request.go:629] Waited for 195.388243ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:13:33.880634   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:13:33.880641   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:33.880649   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:33.880658   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:33.885130   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:13:33.886696   28839 pod_ready.go:92] pod "kube-scheduler-ha-044175-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 23:13:33.886723   28839 pod_ready.go:81] duration metric: took 401.762075ms for pod "kube-scheduler-ha-044175-m02" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:33.886737   28839 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-044175-m03" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:34.080694   28839 request.go:629] Waited for 193.885489ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-044175-m03
	I0805 23:13:34.080770   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-044175-m03
	I0805 23:13:34.080778   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:34.080786   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:34.080790   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:34.084489   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:34.280530   28839 request.go:629] Waited for 195.363035ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:34.280583   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:34.280587   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:34.280595   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:34.280603   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:34.284457   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:34.285127   28839 pod_ready.go:92] pod "kube-scheduler-ha-044175-m03" in "kube-system" namespace has status "Ready":"True"
	I0805 23:13:34.285145   28839 pod_ready.go:81] duration metric: took 398.400816ms for pod "kube-scheduler-ha-044175-m03" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:34.285156   28839 pod_ready.go:38] duration metric: took 5.200477021s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 23:13:34.285170   28839 api_server.go:52] waiting for apiserver process to appear ...
	I0805 23:13:34.285218   28839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 23:13:34.301918   28839 api_server.go:72] duration metric: took 24.046418005s to wait for apiserver process to appear ...
	I0805 23:13:34.301950   28839 api_server.go:88] waiting for apiserver healthz status ...
	I0805 23:13:34.301973   28839 api_server.go:253] Checking apiserver healthz at https://192.168.39.57:8443/healthz ...
	I0805 23:13:34.309670   28839 api_server.go:279] https://192.168.39.57:8443/healthz returned 200:
	ok
	I0805 23:13:34.309729   28839 round_trippers.go:463] GET https://192.168.39.57:8443/version
	I0805 23:13:34.309736   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:34.309744   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:34.309752   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:34.310981   28839 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 23:13:34.311038   28839 api_server.go:141] control plane version: v1.30.3
	I0805 23:13:34.311074   28839 api_server.go:131] duration metric: took 9.116905ms to wait for apiserver health ...
	I0805 23:13:34.311088   28839 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 23:13:34.480516   28839 request.go:629] Waited for 169.354206ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods
	I0805 23:13:34.480585   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods
	I0805 23:13:34.480593   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:34.480603   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:34.480614   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:34.488406   28839 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0805 23:13:34.494649   28839 system_pods.go:59] 24 kube-system pods found
	I0805 23:13:34.494678   28839 system_pods.go:61] "coredns-7db6d8ff4d-g9bml" [fd474413-e416-48db-a7bf-f3c40675819b] Running
	I0805 23:13:34.494682   28839 system_pods.go:61] "coredns-7db6d8ff4d-vzhst" [f9c09745-be29-4403-9e7d-f9e4eaae5cac] Running
	I0805 23:13:34.494688   28839 system_pods.go:61] "etcd-ha-044175" [f9008d52-5a0c-4a6b-9cdf-7df18dd78752] Running
	I0805 23:13:34.494692   28839 system_pods.go:61] "etcd-ha-044175-m02" [773f42be-f8b5-47f0-bcd0-36bd6ae24bab] Running
	I0805 23:13:34.494695   28839 system_pods.go:61] "etcd-ha-044175-m03" [5704b0d2-6558-4321-9443-e4c7827bbd39] Running
	I0805 23:13:34.494698   28839 system_pods.go:61] "kindnet-hqhgc" [de6b28dc-79ea-43af-868e-e32180dcd5f2] Running
	I0805 23:13:34.494701   28839 system_pods.go:61] "kindnet-mc7wf" [c0635f1a-e26d-47b6-98f3-675d6e0b8acc] Running
	I0805 23:13:34.494705   28839 system_pods.go:61] "kindnet-xqx4z" [8455705e-b140-4f1e-abff-6a71bbb5415f] Running
	I0805 23:13:34.494708   28839 system_pods.go:61] "kube-apiserver-ha-044175" [4e39654d-531d-4cf4-b4a9-beeada8e8d05] Running
	I0805 23:13:34.494711   28839 system_pods.go:61] "kube-apiserver-ha-044175-m02" [06dfad00-f627-43cd-abea-c3a34d423964] Running
	I0805 23:13:34.494714   28839 system_pods.go:61] "kube-apiserver-ha-044175-m03" [d448c79d-6668-4d54-9814-2dac3eb5162d] Running
	I0805 23:13:34.494717   28839 system_pods.go:61] "kube-controller-manager-ha-044175" [d6f6d163-103f-4af4-976f-c255d1933bb2] Running
	I0805 23:13:34.494720   28839 system_pods.go:61] "kube-controller-manager-ha-044175-m02" [1bf050d3-1969-4ca1-89d3-f729989fd6b8] Running
	I0805 23:13:34.494723   28839 system_pods.go:61] "kube-controller-manager-ha-044175-m03" [ad0efa73-21d4-43e6-b1bd-9320ffd77f38] Running
	I0805 23:13:34.494726   28839 system_pods.go:61] "kube-proxy-4ql5l" [cf451989-77fc-462d-9826-54eeca4047e8] Running
	I0805 23:13:34.494729   28839 system_pods.go:61] "kube-proxy-jfs9q" [d8d0b4df-e1e1-4354-ba55-594dec7d1e89] Running
	I0805 23:13:34.494732   28839 system_pods.go:61] "kube-proxy-vj5sd" [d6c9cdcb-e1b7-44c8-a6e3-5e5aeb76ba03] Running
	I0805 23:13:34.494740   28839 system_pods.go:61] "kube-scheduler-ha-044175" [41c96a32-1b26-4e05-a21a-48c4fd913b9f] Running
	I0805 23:13:34.494742   28839 system_pods.go:61] "kube-scheduler-ha-044175-m02" [8e41f86c-0b86-40be-a524-fbae6283693d] Running
	I0805 23:13:34.494745   28839 system_pods.go:61] "kube-scheduler-ha-044175-m03" [e9faa567-8329-4fc5-a135-2851a03672a6] Running
	I0805 23:13:34.494748   28839 system_pods.go:61] "kube-vip-ha-044175" [505ff885-b8a0-48bd-8d1e-81e4583b48af] Running
	I0805 23:13:34.494753   28839 system_pods.go:61] "kube-vip-ha-044175-m02" [ffbecaef-6482-4c4e-8268-4b66e4799be5] Running
	I0805 23:13:34.494756   28839 system_pods.go:61] "kube-vip-ha-044175-m03" [6defc4ea-8441-46e2-ac1a-0ab55290431c] Running
	I0805 23:13:34.494758   28839 system_pods.go:61] "storage-provisioner" [d30d1a5b-cfbe-4de6-a964-75c32e5dbf62] Running
	I0805 23:13:34.494764   28839 system_pods.go:74] duration metric: took 183.668198ms to wait for pod list to return data ...
	I0805 23:13:34.494774   28839 default_sa.go:34] waiting for default service account to be created ...
	I0805 23:13:34.680796   28839 request.go:629] Waited for 185.959448ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/default/serviceaccounts
	I0805 23:13:34.680853   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/default/serviceaccounts
	I0805 23:13:34.680858   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:34.680865   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:34.680868   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:34.684549   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:34.684672   28839 default_sa.go:45] found service account: "default"
	I0805 23:13:34.684685   28839 default_sa.go:55] duration metric: took 189.905927ms for default service account to be created ...
	I0805 23:13:34.684694   28839 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 23:13:34.881112   28839 request.go:629] Waited for 196.358612ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods
	I0805 23:13:34.881179   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods
	I0805 23:13:34.881186   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:34.881196   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:34.881202   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:34.888776   28839 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0805 23:13:34.895122   28839 system_pods.go:86] 24 kube-system pods found
	I0805 23:13:34.895149   28839 system_pods.go:89] "coredns-7db6d8ff4d-g9bml" [fd474413-e416-48db-a7bf-f3c40675819b] Running
	I0805 23:13:34.895155   28839 system_pods.go:89] "coredns-7db6d8ff4d-vzhst" [f9c09745-be29-4403-9e7d-f9e4eaae5cac] Running
	I0805 23:13:34.895159   28839 system_pods.go:89] "etcd-ha-044175" [f9008d52-5a0c-4a6b-9cdf-7df18dd78752] Running
	I0805 23:13:34.895163   28839 system_pods.go:89] "etcd-ha-044175-m02" [773f42be-f8b5-47f0-bcd0-36bd6ae24bab] Running
	I0805 23:13:34.895167   28839 system_pods.go:89] "etcd-ha-044175-m03" [5704b0d2-6558-4321-9443-e4c7827bbd39] Running
	I0805 23:13:34.895171   28839 system_pods.go:89] "kindnet-hqhgc" [de6b28dc-79ea-43af-868e-e32180dcd5f2] Running
	I0805 23:13:34.895175   28839 system_pods.go:89] "kindnet-mc7wf" [c0635f1a-e26d-47b6-98f3-675d6e0b8acc] Running
	I0805 23:13:34.895179   28839 system_pods.go:89] "kindnet-xqx4z" [8455705e-b140-4f1e-abff-6a71bbb5415f] Running
	I0805 23:13:34.895183   28839 system_pods.go:89] "kube-apiserver-ha-044175" [4e39654d-531d-4cf4-b4a9-beeada8e8d05] Running
	I0805 23:13:34.895188   28839 system_pods.go:89] "kube-apiserver-ha-044175-m02" [06dfad00-f627-43cd-abea-c3a34d423964] Running
	I0805 23:13:34.895192   28839 system_pods.go:89] "kube-apiserver-ha-044175-m03" [d448c79d-6668-4d54-9814-2dac3eb5162d] Running
	I0805 23:13:34.895196   28839 system_pods.go:89] "kube-controller-manager-ha-044175" [d6f6d163-103f-4af4-976f-c255d1933bb2] Running
	I0805 23:13:34.895200   28839 system_pods.go:89] "kube-controller-manager-ha-044175-m02" [1bf050d3-1969-4ca1-89d3-f729989fd6b8] Running
	I0805 23:13:34.895204   28839 system_pods.go:89] "kube-controller-manager-ha-044175-m03" [ad0efa73-21d4-43e6-b1bd-9320ffd77f38] Running
	I0805 23:13:34.895209   28839 system_pods.go:89] "kube-proxy-4ql5l" [cf451989-77fc-462d-9826-54eeca4047e8] Running
	I0805 23:13:34.895213   28839 system_pods.go:89] "kube-proxy-jfs9q" [d8d0b4df-e1e1-4354-ba55-594dec7d1e89] Running
	I0805 23:13:34.895218   28839 system_pods.go:89] "kube-proxy-vj5sd" [d6c9cdcb-e1b7-44c8-a6e3-5e5aeb76ba03] Running
	I0805 23:13:34.895222   28839 system_pods.go:89] "kube-scheduler-ha-044175" [41c96a32-1b26-4e05-a21a-48c4fd913b9f] Running
	I0805 23:13:34.895228   28839 system_pods.go:89] "kube-scheduler-ha-044175-m02" [8e41f86c-0b86-40be-a524-fbae6283693d] Running
	I0805 23:13:34.895231   28839 system_pods.go:89] "kube-scheduler-ha-044175-m03" [e9faa567-8329-4fc5-a135-2851a03672a6] Running
	I0805 23:13:34.895237   28839 system_pods.go:89] "kube-vip-ha-044175" [505ff885-b8a0-48bd-8d1e-81e4583b48af] Running
	I0805 23:13:34.895241   28839 system_pods.go:89] "kube-vip-ha-044175-m02" [ffbecaef-6482-4c4e-8268-4b66e4799be5] Running
	I0805 23:13:34.895247   28839 system_pods.go:89] "kube-vip-ha-044175-m03" [6defc4ea-8441-46e2-ac1a-0ab55290431c] Running
	I0805 23:13:34.895250   28839 system_pods.go:89] "storage-provisioner" [d30d1a5b-cfbe-4de6-a964-75c32e5dbf62] Running
	I0805 23:13:34.895256   28839 system_pods.go:126] duration metric: took 210.557395ms to wait for k8s-apps to be running ...
	I0805 23:13:34.895264   28839 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 23:13:34.895308   28839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 23:13:34.911311   28839 system_svc.go:56] duration metric: took 16.041336ms WaitForService to wait for kubelet
	I0805 23:13:34.911336   28839 kubeadm.go:582] duration metric: took 24.655841277s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 23:13:34.911355   28839 node_conditions.go:102] verifying NodePressure condition ...
	I0805 23:13:35.080788   28839 request.go:629] Waited for 169.357422ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes
	I0805 23:13:35.080855   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes
	I0805 23:13:35.080893   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:35.080916   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:35.080929   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:35.084961   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:13:35.086423   28839 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 23:13:35.086445   28839 node_conditions.go:123] node cpu capacity is 2
	I0805 23:13:35.086461   28839 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 23:13:35.086467   28839 node_conditions.go:123] node cpu capacity is 2
	I0805 23:13:35.086474   28839 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 23:13:35.086481   28839 node_conditions.go:123] node cpu capacity is 2
	I0805 23:13:35.086488   28839 node_conditions.go:105] duration metric: took 175.127143ms to run NodePressure ...
	I0805 23:13:35.086506   28839 start.go:241] waiting for startup goroutines ...
	I0805 23:13:35.086533   28839 start.go:255] writing updated cluster config ...
	I0805 23:13:35.086868   28839 ssh_runner.go:195] Run: rm -f paused
	I0805 23:13:35.138880   28839 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0805 23:13:35.140884   28839 out.go:177] * Done! kubectl is now configured to use "ha-044175" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 05 23:17:15 ha-044175 crio[684]: time="2024-08-05 23:17:15.582135616Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722899835582099661,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a8ec807d-4452-4e2b-954c-387e3f11d30c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:17:15 ha-044175 crio[684]: time="2024-08-05 23:17:15.585970554Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d2743e02-7c1b-4c0a-8e1b-6ca7d87c609c name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:17:15 ha-044175 crio[684]: time="2024-08-05 23:17:15.586031806Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d2743e02-7c1b-4c0a-8e1b-6ca7d87c609c name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:17:15 ha-044175 crio[684]: time="2024-08-05 23:17:15.586256498Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:14f7140ac408890dd788c7a9d6a9857531edad86ff751157ac035e6ab0d4afdc,PodSandboxId:1bf94d816bd6b0f9325f20c0b2453330291a5dfa79448419ddd925a97f951bb9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722899618925179407,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wmfql,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfc8bad7-d43d-4beb-991e-339a4ce96ab5,},Annotations:map[string]string{io.kubernetes.container.hash: fc00d50e,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8f17a7a758ce7d69c780273e3653b03bc4c01767911d236cad9862a3337e50,PodSandboxId:5d4208cbe441324fb59633dbd487e1e04ee180f1f9763a207a4979e68a4ab71e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722899473852759909,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d30d1a5b-cfbe-4de6-a964-75c32e5dbf62,},Annotations:map[string]string{io.kubernetes.container.hash: 4378961a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4617bbebfc992da16ee550b4c2c74a6d4c58299fe2518f6d24c3a10b1e02c941,PodSandboxId:449b4adbddbde16b1d8ca1645ef0b728416e504b57b2e560589ffd060ad34e4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722899473857623130,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g9bml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd474413-e416-48db-a7bf-f3c40675819b,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd67db4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e65205c398221a15eecea1ec1092d54f364a44886b05149400c7be5ffafc3285,PodSandboxId:0df1c00cbbb9d6891997d631537dd7662e552d8dca3cea20f0b653ed34f6f7bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722899473821870209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vzhst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9c09745-be
29-4403-9e7d-f9e4eaae5cac,},Annotations:map[string]string{io.kubernetes.container.hash: 1a8c310a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97fa319bea82614cab7525f9052bcc8a09fad765b260045dbf0d0fa0ca0290b2,PodSandboxId:4f369251bc6de76b6eba2d8a6404cb53a6bcba17f58bd09854de9edd65d080fa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CO
NTAINER_RUNNING,CreatedAt:1722899461696934419,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xqx4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8455705e-b140-4f1e-abff-6a71bbb5415f,},Annotations:map[string]string{io.kubernetes.container.hash: 4a9283b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04c382fd4a32fe8685a6f643ecf7a291e4d542c2223975f9df92991fe566b12a,PodSandboxId:b7b77d3f5c8a24f9906eb41c479b7254cd21f7c4d0c34b7014bdfa5f666df829,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172289945
7757340037,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vj5sd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c9cdcb-e1b7-44c8-a6e3-5e5aeb76ba03,},Annotations:map[string]string{io.kubernetes.container.hash: a40979c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40fc9655d4bc3a83cded30a0628a93c01856e1db81e027d8d131004479df9ed3,PodSandboxId:8ece168043c14c199a06a5ef7db680c0d579fe87db735e94a6522f616365372e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17228994417
23968430,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26033e5e6fae3c18f82268d3b219e4ab,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c90a080943378c8bb82560d92b4399ff4ea03ab68d06f0de21852e1df609090,PodSandboxId:f0615d6a6ed3b0a919333497ebf049ca31c007ff3340b12a0a3b89c149d2558f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722899438261300658,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5280d6dbae40883a34349dd31a13a779,},Annotations:map[string]string{io.kubernetes.container.hash: bd2d1b8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0893967672c7dc591bbcf220e56601b8a46fc11f07e63adbadaddec59ec1803,PodSandboxId:c7f5da3aca5fb3bac198b9144677aac33c3f5317946dad29f46e726a35d2c596,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722899438287785506,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47fd3d59fe4024c671f4b57dbae12a83,},Annotations:map[string]string{io.kubernetes.container.hash: fa9a7bc3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a85f2254a23cdec7e89ff8de2e31b06ddf2853808330965760217f1fd834004,PodSandboxId:57dd6eb50740256e4db3c59d0c1d850b0ba784d01abbeb7f8ea139160576fc43,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722899438266855231,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: ku
be-scheduler-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87091e6c521c934e57911d0cd84fc454,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52e65ab51d03f5a6abf04b86a788a251259de2c7971b7f676c0b5c5eb33e5849,PodSandboxId:41084305e84434e5136bb133632d08d27b3092395382f9508528787851465c5c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722899438199945652,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de889c914a63f88b5552d92d7c04005b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d2743e02-7c1b-4c0a-8e1b-6ca7d87c609c name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:17:15 ha-044175 crio[684]: time="2024-08-05 23:17:15.628466845Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a0da8499-8796-4c48-b3a1-9d94089e595b name=/runtime.v1.RuntimeService/Version
	Aug 05 23:17:15 ha-044175 crio[684]: time="2024-08-05 23:17:15.628562416Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a0da8499-8796-4c48-b3a1-9d94089e595b name=/runtime.v1.RuntimeService/Version
	Aug 05 23:17:15 ha-044175 crio[684]: time="2024-08-05 23:17:15.629605426Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c5ca5f8a-21fe-4af7-adb6-c6d3a19d9e0b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:17:15 ha-044175 crio[684]: time="2024-08-05 23:17:15.630059946Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722899835630038076,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c5ca5f8a-21fe-4af7-adb6-c6d3a19d9e0b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:17:15 ha-044175 crio[684]: time="2024-08-05 23:17:15.630610247Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4f7894db-81d2-4183-be12-aec780662e0b name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:17:15 ha-044175 crio[684]: time="2024-08-05 23:17:15.630664647Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4f7894db-81d2-4183-be12-aec780662e0b name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:17:15 ha-044175 crio[684]: time="2024-08-05 23:17:15.630909741Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:14f7140ac408890dd788c7a9d6a9857531edad86ff751157ac035e6ab0d4afdc,PodSandboxId:1bf94d816bd6b0f9325f20c0b2453330291a5dfa79448419ddd925a97f951bb9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722899618925179407,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wmfql,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfc8bad7-d43d-4beb-991e-339a4ce96ab5,},Annotations:map[string]string{io.kubernetes.container.hash: fc00d50e,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8f17a7a758ce7d69c780273e3653b03bc4c01767911d236cad9862a3337e50,PodSandboxId:5d4208cbe441324fb59633dbd487e1e04ee180f1f9763a207a4979e68a4ab71e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722899473852759909,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d30d1a5b-cfbe-4de6-a964-75c32e5dbf62,},Annotations:map[string]string{io.kubernetes.container.hash: 4378961a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4617bbebfc992da16ee550b4c2c74a6d4c58299fe2518f6d24c3a10b1e02c941,PodSandboxId:449b4adbddbde16b1d8ca1645ef0b728416e504b57b2e560589ffd060ad34e4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722899473857623130,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g9bml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd474413-e416-48db-a7bf-f3c40675819b,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd67db4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e65205c398221a15eecea1ec1092d54f364a44886b05149400c7be5ffafc3285,PodSandboxId:0df1c00cbbb9d6891997d631537dd7662e552d8dca3cea20f0b653ed34f6f7bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722899473821870209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vzhst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9c09745-be
29-4403-9e7d-f9e4eaae5cac,},Annotations:map[string]string{io.kubernetes.container.hash: 1a8c310a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97fa319bea82614cab7525f9052bcc8a09fad765b260045dbf0d0fa0ca0290b2,PodSandboxId:4f369251bc6de76b6eba2d8a6404cb53a6bcba17f58bd09854de9edd65d080fa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CO
NTAINER_RUNNING,CreatedAt:1722899461696934419,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xqx4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8455705e-b140-4f1e-abff-6a71bbb5415f,},Annotations:map[string]string{io.kubernetes.container.hash: 4a9283b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04c382fd4a32fe8685a6f643ecf7a291e4d542c2223975f9df92991fe566b12a,PodSandboxId:b7b77d3f5c8a24f9906eb41c479b7254cd21f7c4d0c34b7014bdfa5f666df829,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172289945
7757340037,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vj5sd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c9cdcb-e1b7-44c8-a6e3-5e5aeb76ba03,},Annotations:map[string]string{io.kubernetes.container.hash: a40979c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40fc9655d4bc3a83cded30a0628a93c01856e1db81e027d8d131004479df9ed3,PodSandboxId:8ece168043c14c199a06a5ef7db680c0d579fe87db735e94a6522f616365372e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17228994417
23968430,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26033e5e6fae3c18f82268d3b219e4ab,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c90a080943378c8bb82560d92b4399ff4ea03ab68d06f0de21852e1df609090,PodSandboxId:f0615d6a6ed3b0a919333497ebf049ca31c007ff3340b12a0a3b89c149d2558f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722899438261300658,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5280d6dbae40883a34349dd31a13a779,},Annotations:map[string]string{io.kubernetes.container.hash: bd2d1b8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0893967672c7dc591bbcf220e56601b8a46fc11f07e63adbadaddec59ec1803,PodSandboxId:c7f5da3aca5fb3bac198b9144677aac33c3f5317946dad29f46e726a35d2c596,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722899438287785506,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47fd3d59fe4024c671f4b57dbae12a83,},Annotations:map[string]string{io.kubernetes.container.hash: fa9a7bc3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a85f2254a23cdec7e89ff8de2e31b06ddf2853808330965760217f1fd834004,PodSandboxId:57dd6eb50740256e4db3c59d0c1d850b0ba784d01abbeb7f8ea139160576fc43,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722899438266855231,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: ku
be-scheduler-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87091e6c521c934e57911d0cd84fc454,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52e65ab51d03f5a6abf04b86a788a251259de2c7971b7f676c0b5c5eb33e5849,PodSandboxId:41084305e84434e5136bb133632d08d27b3092395382f9508528787851465c5c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722899438199945652,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de889c914a63f88b5552d92d7c04005b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4f7894db-81d2-4183-be12-aec780662e0b name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:17:15 ha-044175 crio[684]: time="2024-08-05 23:17:15.677598837Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fb1a621b-2a64-4cfd-b88f-bf4714ed25b3 name=/runtime.v1.RuntimeService/Version
	Aug 05 23:17:15 ha-044175 crio[684]: time="2024-08-05 23:17:15.677682726Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fb1a621b-2a64-4cfd-b88f-bf4714ed25b3 name=/runtime.v1.RuntimeService/Version
	Aug 05 23:17:15 ha-044175 crio[684]: time="2024-08-05 23:17:15.678792228Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=367a5b4e-8d13-4f0b-85b5-632396c60820 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:17:15 ha-044175 crio[684]: time="2024-08-05 23:17:15.679574804Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722899835679548475,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=367a5b4e-8d13-4f0b-85b5-632396c60820 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:17:15 ha-044175 crio[684]: time="2024-08-05 23:17:15.680093873Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3a7de800-2107-491a-815b-bfc1c61de314 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:17:15 ha-044175 crio[684]: time="2024-08-05 23:17:15.680253266Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3a7de800-2107-491a-815b-bfc1c61de314 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:17:15 ha-044175 crio[684]: time="2024-08-05 23:17:15.680613088Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:14f7140ac408890dd788c7a9d6a9857531edad86ff751157ac035e6ab0d4afdc,PodSandboxId:1bf94d816bd6b0f9325f20c0b2453330291a5dfa79448419ddd925a97f951bb9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722899618925179407,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wmfql,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfc8bad7-d43d-4beb-991e-339a4ce96ab5,},Annotations:map[string]string{io.kubernetes.container.hash: fc00d50e,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8f17a7a758ce7d69c780273e3653b03bc4c01767911d236cad9862a3337e50,PodSandboxId:5d4208cbe441324fb59633dbd487e1e04ee180f1f9763a207a4979e68a4ab71e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722899473852759909,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d30d1a5b-cfbe-4de6-a964-75c32e5dbf62,},Annotations:map[string]string{io.kubernetes.container.hash: 4378961a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4617bbebfc992da16ee550b4c2c74a6d4c58299fe2518f6d24c3a10b1e02c941,PodSandboxId:449b4adbddbde16b1d8ca1645ef0b728416e504b57b2e560589ffd060ad34e4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722899473857623130,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g9bml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd474413-e416-48db-a7bf-f3c40675819b,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd67db4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e65205c398221a15eecea1ec1092d54f364a44886b05149400c7be5ffafc3285,PodSandboxId:0df1c00cbbb9d6891997d631537dd7662e552d8dca3cea20f0b653ed34f6f7bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722899473821870209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vzhst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9c09745-be
29-4403-9e7d-f9e4eaae5cac,},Annotations:map[string]string{io.kubernetes.container.hash: 1a8c310a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97fa319bea82614cab7525f9052bcc8a09fad765b260045dbf0d0fa0ca0290b2,PodSandboxId:4f369251bc6de76b6eba2d8a6404cb53a6bcba17f58bd09854de9edd65d080fa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CO
NTAINER_RUNNING,CreatedAt:1722899461696934419,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xqx4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8455705e-b140-4f1e-abff-6a71bbb5415f,},Annotations:map[string]string{io.kubernetes.container.hash: 4a9283b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04c382fd4a32fe8685a6f643ecf7a291e4d542c2223975f9df92991fe566b12a,PodSandboxId:b7b77d3f5c8a24f9906eb41c479b7254cd21f7c4d0c34b7014bdfa5f666df829,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172289945
7757340037,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vj5sd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c9cdcb-e1b7-44c8-a6e3-5e5aeb76ba03,},Annotations:map[string]string{io.kubernetes.container.hash: a40979c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40fc9655d4bc3a83cded30a0628a93c01856e1db81e027d8d131004479df9ed3,PodSandboxId:8ece168043c14c199a06a5ef7db680c0d579fe87db735e94a6522f616365372e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17228994417
23968430,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26033e5e6fae3c18f82268d3b219e4ab,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c90a080943378c8bb82560d92b4399ff4ea03ab68d06f0de21852e1df609090,PodSandboxId:f0615d6a6ed3b0a919333497ebf049ca31c007ff3340b12a0a3b89c149d2558f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722899438261300658,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5280d6dbae40883a34349dd31a13a779,},Annotations:map[string]string{io.kubernetes.container.hash: bd2d1b8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0893967672c7dc591bbcf220e56601b8a46fc11f07e63adbadaddec59ec1803,PodSandboxId:c7f5da3aca5fb3bac198b9144677aac33c3f5317946dad29f46e726a35d2c596,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722899438287785506,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47fd3d59fe4024c671f4b57dbae12a83,},Annotations:map[string]string{io.kubernetes.container.hash: fa9a7bc3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a85f2254a23cdec7e89ff8de2e31b06ddf2853808330965760217f1fd834004,PodSandboxId:57dd6eb50740256e4db3c59d0c1d850b0ba784d01abbeb7f8ea139160576fc43,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722899438266855231,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: ku
be-scheduler-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87091e6c521c934e57911d0cd84fc454,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52e65ab51d03f5a6abf04b86a788a251259de2c7971b7f676c0b5c5eb33e5849,PodSandboxId:41084305e84434e5136bb133632d08d27b3092395382f9508528787851465c5c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722899438199945652,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de889c914a63f88b5552d92d7c04005b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3a7de800-2107-491a-815b-bfc1c61de314 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:17:15 ha-044175 crio[684]: time="2024-08-05 23:17:15.728137942Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a4149dc7-3681-446e-aa67-2fa38953cea3 name=/runtime.v1.RuntimeService/Version
	Aug 05 23:17:15 ha-044175 crio[684]: time="2024-08-05 23:17:15.728213653Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a4149dc7-3681-446e-aa67-2fa38953cea3 name=/runtime.v1.RuntimeService/Version
	Aug 05 23:17:15 ha-044175 crio[684]: time="2024-08-05 23:17:15.730147309Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8c248b14-a064-4d02-98b6-6088aa62045e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:17:15 ha-044175 crio[684]: time="2024-08-05 23:17:15.730684805Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722899835730658467,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8c248b14-a064-4d02-98b6-6088aa62045e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:17:15 ha-044175 crio[684]: time="2024-08-05 23:17:15.731433224Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=42e2abbc-4241-4a90-ad2b-8c1a72415d21 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:17:15 ha-044175 crio[684]: time="2024-08-05 23:17:15.731510197Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=42e2abbc-4241-4a90-ad2b-8c1a72415d21 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:17:15 ha-044175 crio[684]: time="2024-08-05 23:17:15.731756563Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:14f7140ac408890dd788c7a9d6a9857531edad86ff751157ac035e6ab0d4afdc,PodSandboxId:1bf94d816bd6b0f9325f20c0b2453330291a5dfa79448419ddd925a97f951bb9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722899618925179407,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wmfql,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfc8bad7-d43d-4beb-991e-339a4ce96ab5,},Annotations:map[string]string{io.kubernetes.container.hash: fc00d50e,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8f17a7a758ce7d69c780273e3653b03bc4c01767911d236cad9862a3337e50,PodSandboxId:5d4208cbe441324fb59633dbd487e1e04ee180f1f9763a207a4979e68a4ab71e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722899473852759909,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d30d1a5b-cfbe-4de6-a964-75c32e5dbf62,},Annotations:map[string]string{io.kubernetes.container.hash: 4378961a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4617bbebfc992da16ee550b4c2c74a6d4c58299fe2518f6d24c3a10b1e02c941,PodSandboxId:449b4adbddbde16b1d8ca1645ef0b728416e504b57b2e560589ffd060ad34e4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722899473857623130,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g9bml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd474413-e416-48db-a7bf-f3c40675819b,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd67db4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e65205c398221a15eecea1ec1092d54f364a44886b05149400c7be5ffafc3285,PodSandboxId:0df1c00cbbb9d6891997d631537dd7662e552d8dca3cea20f0b653ed34f6f7bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722899473821870209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vzhst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9c09745-be
29-4403-9e7d-f9e4eaae5cac,},Annotations:map[string]string{io.kubernetes.container.hash: 1a8c310a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97fa319bea82614cab7525f9052bcc8a09fad765b260045dbf0d0fa0ca0290b2,PodSandboxId:4f369251bc6de76b6eba2d8a6404cb53a6bcba17f58bd09854de9edd65d080fa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CO
NTAINER_RUNNING,CreatedAt:1722899461696934419,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xqx4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8455705e-b140-4f1e-abff-6a71bbb5415f,},Annotations:map[string]string{io.kubernetes.container.hash: 4a9283b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04c382fd4a32fe8685a6f643ecf7a291e4d542c2223975f9df92991fe566b12a,PodSandboxId:b7b77d3f5c8a24f9906eb41c479b7254cd21f7c4d0c34b7014bdfa5f666df829,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172289945
7757340037,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vj5sd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c9cdcb-e1b7-44c8-a6e3-5e5aeb76ba03,},Annotations:map[string]string{io.kubernetes.container.hash: a40979c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40fc9655d4bc3a83cded30a0628a93c01856e1db81e027d8d131004479df9ed3,PodSandboxId:8ece168043c14c199a06a5ef7db680c0d579fe87db735e94a6522f616365372e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17228994417
23968430,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26033e5e6fae3c18f82268d3b219e4ab,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c90a080943378c8bb82560d92b4399ff4ea03ab68d06f0de21852e1df609090,PodSandboxId:f0615d6a6ed3b0a919333497ebf049ca31c007ff3340b12a0a3b89c149d2558f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722899438261300658,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5280d6dbae40883a34349dd31a13a779,},Annotations:map[string]string{io.kubernetes.container.hash: bd2d1b8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0893967672c7dc591bbcf220e56601b8a46fc11f07e63adbadaddec59ec1803,PodSandboxId:c7f5da3aca5fb3bac198b9144677aac33c3f5317946dad29f46e726a35d2c596,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722899438287785506,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47fd3d59fe4024c671f4b57dbae12a83,},Annotations:map[string]string{io.kubernetes.container.hash: fa9a7bc3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a85f2254a23cdec7e89ff8de2e31b06ddf2853808330965760217f1fd834004,PodSandboxId:57dd6eb50740256e4db3c59d0c1d850b0ba784d01abbeb7f8ea139160576fc43,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722899438266855231,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: ku
be-scheduler-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87091e6c521c934e57911d0cd84fc454,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52e65ab51d03f5a6abf04b86a788a251259de2c7971b7f676c0b5c5eb33e5849,PodSandboxId:41084305e84434e5136bb133632d08d27b3092395382f9508528787851465c5c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722899438199945652,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de889c914a63f88b5552d92d7c04005b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=42e2abbc-4241-4a90-ad2b-8c1a72415d21 name=/runtime.v1.RuntimeService/ListContainers
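For reference, the repeated /runtime.v1.RuntimeService/ListContainers entries above are ordinary CRI calls issued against CRI-O's socket a few tens of milliseconds apart (23:17:15.630, .680, .731). The sketch below is not minikube or CRI-O code; it only illustrates how the same call could be issued directly. The socket path is taken from the cri-socket annotation under "describe nodes" further down, and the use of the k8s.io/cri-api client is an assumption for illustration.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Same endpoint the kubelet uses on this node; assumed reachable as root.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	client := runtimeapi.NewRuntimeServiceClient(conn)

	// An empty filter is what produces the "No filters were applied,
	// returning full container list" debug line in the log above.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		// The first 13 characters of the ID match the truncated IDs shown
		// in the container status table below.
		fmt.Printf("%s  %-25s  %s\n", c.Id[:13], c.Metadata.Name, c.State)
	}
}

The same listing is available on the node with `sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a`, which is essentially what the container status table below renders.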
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	14f7140ac4088       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   1bf94d816bd6b       busybox-fc5497c4f-wmfql
	4617bbebfc992       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   449b4adbddbde       coredns-7db6d8ff4d-g9bml
	5e8f17a7a758c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   5d4208cbe4413       storage-provisioner
	e65205c398221       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   0df1c00cbbb9d       coredns-7db6d8ff4d-vzhst
	97fa319bea826       docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3    6 minutes ago       Running             kindnet-cni               0                   4f369251bc6de       kindnet-xqx4z
	04c382fd4a32f       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      6 minutes ago       Running             kube-proxy                0                   b7b77d3f5c8a2       kube-proxy-vj5sd
	40fc9655d4bc3       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   8ece168043c14       kube-vip-ha-044175
	b0893967672c7       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   c7f5da3aca5fb       etcd-ha-044175
	2a85f2254a23c       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      6 minutes ago       Running             kube-scheduler            0                   57dd6eb507402       kube-scheduler-ha-044175
	0c90a08094337       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      6 minutes ago       Running             kube-apiserver            0                   f0615d6a6ed3b       kube-apiserver-ha-044175
	52e65ab51d03f       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      6 minutes ago       Running             kube-controller-manager   0                   41084305e8443       kube-controller-manager-ha-044175
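The CREATED column in this table is a human-readable rendering of the nanosecond CreatedAt values in the ListContainers responses above. A quick sketch of that conversion (the CreatedAt value is copied from the busybox entry; the snapshot time is the 23:17:15 timestamp on the surrounding crio log lines):

package main

import (
	"fmt"
	"time"
)

func main() {
	// CreatedAt in the ListContainers log above is in Unix nanoseconds.
	createdAt := int64(1722899618925179407) // busybox-fc5497c4f-wmfql
	created := time.Unix(0, createdAt).UTC()

	// The crio log lines around this snapshot are stamped 2024-08-05 23:17:15 UTC.
	snapshot := time.Date(2024, 8, 5, 23, 17, 15, 0, time.UTC)

	fmt.Println(created.Format(time.RFC3339))                // 2024-08-05T23:13:38Z
	fmt.Println(snapshot.Sub(created).Truncate(time.Minute)) // 3m0s -> "3 minutes ago"
}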
	
	
	==> coredns [4617bbebfc992da16ee550b4c2c74a6d4c58299fe2518f6d24c3a10b1e02c941] <==
	[INFO] 10.244.0.4:60064 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002005742s
	[INFO] 10.244.2.2:39716 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000202045s
	[INFO] 10.244.2.2:55066 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000286931s
	[INFO] 10.244.2.2:34830 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000174873s
	[INFO] 10.244.1.2:45895 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157477s
	[INFO] 10.244.1.2:49930 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000144865s
	[INFO] 10.244.1.2:45888 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000155945s
	[INFO] 10.244.1.2:59948 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000081002s
	[INFO] 10.244.0.4:36231 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000095138s
	[INFO] 10.244.0.4:40536 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000228107s
	[INFO] 10.244.0.4:41374 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001409689s
	[INFO] 10.244.0.4:38989 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000065121s
	[INFO] 10.244.0.4:40466 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080252s
	[INFO] 10.244.2.2:50462 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000101194s
	[INFO] 10.244.2.2:37087 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000067854s
	[INFO] 10.244.1.2:33354 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011049s
	[INFO] 10.244.1.2:46378 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081449s
	[INFO] 10.244.1.2:35178 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059454s
	[INFO] 10.244.0.4:36998 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000070571s
	[INFO] 10.244.0.4:58448 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000039944s
	[INFO] 10.244.2.2:44511 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000351558s
	[INFO] 10.244.2.2:49689 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000125275s
	[INFO] 10.244.1.2:53510 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000125157s
	[INFO] 10.244.0.4:59119 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000073623s
	[INFO] 10.244.0.4:42575 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000124164s
	
	
	==> coredns [e65205c398221a15eecea1ec1092d54f364a44886b05149400c7be5ffafc3285] <==
	[INFO] 10.244.1.2:48958 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001900539s
	[INFO] 10.244.2.2:35523 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000199288s
	[INFO] 10.244.2.2:44169 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003510071s
	[INFO] 10.244.2.2:35265 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00015048s
	[INFO] 10.244.2.2:55592 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003101906s
	[INFO] 10.244.2.2:56153 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00013893s
	[INFO] 10.244.1.2:33342 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001850863s
	[INFO] 10.244.1.2:42287 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000148733s
	[INFO] 10.244.1.2:54735 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100517s
	[INFO] 10.244.1.2:59789 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001317452s
	[INFO] 10.244.0.4:40404 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000074048s
	[INFO] 10.244.0.4:48828 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002066965s
	[INFO] 10.244.0.4:45447 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000152682s
	[INFO] 10.244.2.2:44344 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146254s
	[INFO] 10.244.2.2:44960 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000197937s
	[INFO] 10.244.1.2:46098 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107825s
	[INFO] 10.244.0.4:53114 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104641s
	[INFO] 10.244.0.4:55920 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073557s
	[INFO] 10.244.2.2:36832 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001192s
	[INFO] 10.244.2.2:36836 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00014154s
	[INFO] 10.244.1.2:35009 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00021099s
	[INFO] 10.244.1.2:49630 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00009192s
	[INFO] 10.244.1.2:49164 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000128354s
	[INFO] 10.244.0.4:33938 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000080255s
	[INFO] 10.244.0.4:34551 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000092007s
	
	
	==> describe nodes <==
	Name:               ha-044175
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-044175
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=ha-044175
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_05T23_10_45_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 23:10:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-044175
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 23:17:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 23:13:48 +0000   Mon, 05 Aug 2024 23:10:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 23:13:48 +0000   Mon, 05 Aug 2024 23:10:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 23:13:48 +0000   Mon, 05 Aug 2024 23:10:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 23:13:48 +0000   Mon, 05 Aug 2024 23:11:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.57
	  Hostname:    ha-044175
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a7535c9f09f54963b658b49234079761
	  System UUID:                a7535c9f-09f5-4963-b658-b49234079761
	  Boot ID:                    97ae6699-97e9-4260-9f54-aa4546b6e1f0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-wmfql              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 coredns-7db6d8ff4d-g9bml             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m19s
	  kube-system                 coredns-7db6d8ff4d-vzhst             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m19s
	  kube-system                 etcd-ha-044175                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m32s
	  kube-system                 kindnet-xqx4z                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m19s
	  kube-system                 kube-apiserver-ha-044175             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m32s
	  kube-system                 kube-controller-manager-ha-044175    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m32s
	  kube-system                 kube-proxy-vj5sd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-scheduler-ha-044175             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m32s
	  kube-system                 kube-vip-ha-044175                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m32s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m17s  kube-proxy       
	  Normal  Starting                 6m32s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m32s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m32s  kubelet          Node ha-044175 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m32s  kubelet          Node ha-044175 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m32s  kubelet          Node ha-044175 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m20s  node-controller  Node ha-044175 event: Registered Node ha-044175 in Controller
	  Normal  NodeReady                6m3s   kubelet          Node ha-044175 status is now: NodeReady
	  Normal  RegisteredNode           5m9s   node-controller  Node ha-044175 event: Registered Node ha-044175 in Controller
	  Normal  RegisteredNode           3m52s  node-controller  Node ha-044175 event: Registered Node ha-044175 in Controller
	
	
	Name:               ha-044175-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-044175-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=ha-044175
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_05T23_11_52_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 23:11:49 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-044175-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 23:14:54 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 05 Aug 2024 23:13:52 +0000   Mon, 05 Aug 2024 23:15:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 05 Aug 2024 23:13:52 +0000   Mon, 05 Aug 2024 23:15:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 05 Aug 2024 23:13:52 +0000   Mon, 05 Aug 2024 23:15:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 05 Aug 2024 23:13:52 +0000   Mon, 05 Aug 2024 23:15:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.112
	  Hostname:    ha-044175-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3b8a8f60868345a4bc1ba1393dbdecaf
	  System UUID:                3b8a8f60-8683-45a4-bc1b-a1393dbdecaf
	  Boot ID:                    fc606ffa-9f64-4457-a949-4b120e918d6b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tpqpw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 etcd-ha-044175-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m24s
	  kube-system                 kindnet-hqhgc                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m26s
	  kube-system                 kube-apiserver-ha-044175-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-controller-manager-ha-044175-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 kube-proxy-jfs9q                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-scheduler-ha-044175-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-vip-ha-044175-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m21s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m26s (x8 over 5m27s)  kubelet          Node ha-044175-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m26s (x8 over 5m27s)  kubelet          Node ha-044175-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m26s (x7 over 5m27s)  kubelet          Node ha-044175-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m25s                  node-controller  Node ha-044175-m02 event: Registered Node ha-044175-m02 in Controller
	  Normal  RegisteredNode           5m9s                   node-controller  Node ha-044175-m02 event: Registered Node ha-044175-m02 in Controller
	  Normal  RegisteredNode           3m52s                  node-controller  Node ha-044175-m02 event: Registered Node ha-044175-m02 in Controller
	  Normal  NodeNotReady             102s                   node-controller  Node ha-044175-m02 status is now: NodeNotReady
	
	
	Name:               ha-044175-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-044175-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=ha-044175
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_05T23_13_09_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 23:13:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-044175-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 23:17:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 23:14:07 +0000   Mon, 05 Aug 2024 23:13:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 23:14:07 +0000   Mon, 05 Aug 2024 23:13:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 23:14:07 +0000   Mon, 05 Aug 2024 23:13:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 23:14:07 +0000   Mon, 05 Aug 2024 23:13:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.201
	  Hostname:    ha-044175-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 37d1f61608a14177b68f3f2d22a59a87
	  System UUID:                37d1f616-08a1-4177-b68f-3f2d22a59a87
	  Boot ID:                    7e4c1f16-18ce-41f6-83cb-3892189ef49a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-fqp2t                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 etcd-ha-044175-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m8s
	  kube-system                 kindnet-mc7wf                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m10s
	  kube-system                 kube-apiserver-ha-044175-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-controller-manager-ha-044175-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-proxy-4ql5l                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-scheduler-ha-044175-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-vip-ha-044175-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m4s                   kube-proxy       
	  Normal  RegisteredNode           4m10s                  node-controller  Node ha-044175-m03 event: Registered Node ha-044175-m03 in Controller
	  Normal  NodeHasSufficientMemory  4m10s (x8 over 4m10s)  kubelet          Node ha-044175-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m10s (x8 over 4m10s)  kubelet          Node ha-044175-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m10s (x7 over 4m10s)  kubelet          Node ha-044175-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-044175-m03 event: Registered Node ha-044175-m03 in Controller
	  Normal  RegisteredNode           3m52s                  node-controller  Node ha-044175-m03 event: Registered Node ha-044175-m03 in Controller
	
	
	Name:               ha-044175-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-044175-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=ha-044175
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_05T23_14_14_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 23:14:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-044175-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 23:17:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 23:14:43 +0000   Mon, 05 Aug 2024 23:14:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 23:14:43 +0000   Mon, 05 Aug 2024 23:14:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 23:14:43 +0000   Mon, 05 Aug 2024 23:14:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 23:14:43 +0000   Mon, 05 Aug 2024 23:14:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.228
	  Hostname:    ha-044175-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0d2536a5615e49c8bf2cb4a8d6f85b2f
	  System UUID:                0d2536a5-615e-49c8-bf2c-b4a8d6f85b2f
	  Boot ID:                    588f7741-6c69-4d39-a219-0c7b28545f45
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2rpdm       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m1s
	  kube-system                 kube-proxy-r5567    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m57s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m3s (x2 over 3m3s)  kubelet          Node ha-044175-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m3s (x2 over 3m3s)  kubelet          Node ha-044175-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m3s (x2 over 3m3s)  kubelet          Node ha-044175-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m2s                 node-controller  Node ha-044175-m04 event: Registered Node ha-044175-m04 in Controller
	  Normal  RegisteredNode           3m                   node-controller  Node ha-044175-m04 event: Registered Node ha-044175-m04 in Controller
	  Normal  RegisteredNode           2m59s                node-controller  Node ha-044175-m04 event: Registered Node ha-044175-m04 in Controller
	  Normal  NodeReady                2m42s                kubelet          Node ha-044175-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug 5 23:10] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051183] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040172] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.836800] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.566464] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.616407] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000003] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.214810] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.059894] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066481] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.165121] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.129651] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.275605] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +4.344469] systemd-fstab-generator[777]: Ignoring "noauto" option for root device
	[  +0.058179] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.730128] systemd-fstab-generator[959]: Ignoring "noauto" option for root device
	[  +0.903161] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.792303] systemd-fstab-generator[1383]: Ignoring "noauto" option for root device
	[  +0.087803] kauditd_printk_skb: 51 callbacks suppressed
	[ +13.188886] kauditd_printk_skb: 21 callbacks suppressed
	[Aug 5 23:11] kauditd_printk_skb: 35 callbacks suppressed
	[ +53.752834] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [b0893967672c7dc591bbcf220e56601b8a46fc11f07e63adbadaddec59ec1803] <==
	{"level":"warn","ts":"2024-08-05T23:17:15.574925Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:17:15.645267Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:17:15.675005Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:17:15.788096Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.112:2380/version","remote-member-id":"74b01d9147cbb35","error":"Get \"https://192.168.39.112:2380/version\": dial tcp 192.168.39.112:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-08-05T23:17:15.788152Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"74b01d9147cbb35","error":"Get \"https://192.168.39.112:2380/version\": dial tcp 192.168.39.112:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-08-05T23:17:16.017532Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:17:16.027488Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:17:16.035971Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:17:16.039943Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:17:16.043771Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:17:16.052888Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:17:16.061533Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:17:16.069965Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:17:16.074522Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:17:16.07471Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:17:16.077798Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:17:16.085094Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:17:16.091625Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:17:16.098749Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:17:16.102159Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:17:16.105134Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:17:16.111473Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:17:16.118668Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:17:16.127442Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:17:16.175549Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 23:17:16 up 7 min,  0 users,  load average: 0.17, 0.25, 0.14
	Linux ha-044175 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [97fa319bea82614cab7525f9052bcc8a09fad765b260045dbf0d0fa0ca0290b2] <==
	I0805 23:16:42.763152       1 main.go:322] Node ha-044175-m04 has CIDR [10.244.3.0/24] 
	I0805 23:16:52.761668       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0805 23:16:52.761721       1 main.go:322] Node ha-044175-m04 has CIDR [10.244.3.0/24] 
	I0805 23:16:52.761880       1 main.go:295] Handling node with IPs: map[192.168.39.57:{}]
	I0805 23:16:52.761910       1 main.go:299] handling current node
	I0805 23:16:52.761924       1 main.go:295] Handling node with IPs: map[192.168.39.112:{}]
	I0805 23:16:52.761929       1 main.go:322] Node ha-044175-m02 has CIDR [10.244.1.0/24] 
	I0805 23:16:52.761990       1 main.go:295] Handling node with IPs: map[192.168.39.201:{}]
	I0805 23:16:52.762011       1 main.go:322] Node ha-044175-m03 has CIDR [10.244.2.0/24] 
	I0805 23:17:02.758067       1 main.go:295] Handling node with IPs: map[192.168.39.57:{}]
	I0805 23:17:02.758117       1 main.go:299] handling current node
	I0805 23:17:02.758144       1 main.go:295] Handling node with IPs: map[192.168.39.112:{}]
	I0805 23:17:02.758156       1 main.go:322] Node ha-044175-m02 has CIDR [10.244.1.0/24] 
	I0805 23:17:02.758335       1 main.go:295] Handling node with IPs: map[192.168.39.201:{}]
	I0805 23:17:02.758361       1 main.go:322] Node ha-044175-m03 has CIDR [10.244.2.0/24] 
	I0805 23:17:02.758490       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0805 23:17:02.758511       1 main.go:322] Node ha-044175-m04 has CIDR [10.244.3.0/24] 
	I0805 23:17:12.757425       1 main.go:295] Handling node with IPs: map[192.168.39.112:{}]
	I0805 23:17:12.757557       1 main.go:322] Node ha-044175-m02 has CIDR [10.244.1.0/24] 
	I0805 23:17:12.757720       1 main.go:295] Handling node with IPs: map[192.168.39.201:{}]
	I0805 23:17:12.757745       1 main.go:322] Node ha-044175-m03 has CIDR [10.244.2.0/24] 
	I0805 23:17:12.757820       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0805 23:17:12.757839       1 main.go:322] Node ha-044175-m04 has CIDR [10.244.3.0/24] 
	I0805 23:17:12.757901       1 main.go:295] Handling node with IPs: map[192.168.39.57:{}]
	I0805 23:17:12.757920       1 main.go:299] handling current node
	
	
	==> kube-apiserver [0c90a080943378c8bb82560d92b4399ff4ea03ab68d06f0de21852e1df609090] <==
	I0805 23:10:44.553265       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0805 23:10:44.571802       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0805 23:10:44.705277       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0805 23:10:56.995623       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0805 23:10:57.087672       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0805 23:13:07.385530       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 11.915µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0805 23:13:07.386344       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0805 23:13:07.387157       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0805 23:13:07.388336       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0805 23:13:07.388520       1 timeout.go:142] post-timeout activity - time-elapsed: 1.975722ms, POST "/api/v1/namespaces/kube-system/events" result: <nil>
	E0805 23:13:40.970037       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59396: use of closed network connection
	E0805 23:13:41.351341       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59424: use of closed network connection
	E0805 23:13:41.554982       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59442: use of closed network connection
	E0805 23:13:41.744646       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59466: use of closed network connection
	E0805 23:13:41.925027       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59488: use of closed network connection
	E0805 23:13:42.116864       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59512: use of closed network connection
	E0805 23:13:42.297555       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59532: use of closed network connection
	E0805 23:13:42.533986       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59540: use of closed network connection
	E0805 23:13:42.836007       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59562: use of closed network connection
	E0805 23:13:43.015513       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59590: use of closed network connection
	E0805 23:13:43.210146       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59614: use of closed network connection
	E0805 23:13:43.384149       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59630: use of closed network connection
	E0805 23:13:43.564630       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59640: use of closed network connection
	E0805 23:13:43.739822       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59656: use of closed network connection
	W0805 23:15:02.886672       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.201 192.168.39.57]
	
	
	==> kube-controller-manager [52e65ab51d03f5a6abf04b86a788a251259de2c7971b7f676c0b5c5eb33e5849] <==
	I0805 23:13:06.571041       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-044175-m03" podCIDRs=["10.244.2.0/24"]
	I0805 23:13:06.703870       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-044175-m03"
	I0805 23:13:36.099644       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="119.430442ms"
	I0805 23:13:36.261859       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="160.151474ms"
	I0805 23:13:36.460678       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="190.445386ms"
	E0805 23:13:36.460940       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0805 23:13:36.461609       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="512.602µs"
	I0805 23:13:36.468238       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="65.566µs"
	I0805 23:13:36.594565       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.8598ms"
	I0805 23:13:36.594699       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.571µs"
	I0805 23:13:39.500676       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.430076ms"
	I0805 23:13:39.500785       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.167µs"
	I0805 23:13:39.752849       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.231938ms"
	I0805 23:13:39.753049       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.957µs"
	I0805 23:13:39.956312       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.499µs"
	I0805 23:13:40.560184       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.547403ms"
	I0805 23:13:40.560332       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.584µs"
	E0805 23:14:13.364835       1 certificate_controller.go:146] Sync csr-kff4f failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-kff4f": the object has been modified; please apply your changes to the latest version and try again
	I0805 23:14:13.639773       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-044175-m04\" does not exist"
	I0805 23:14:13.667820       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-044175-m04" podCIDRs=["10.244.3.0/24"]
	I0805 23:14:16.739704       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-044175-m04"
	I0805 23:14:34.661448       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-044175-m04"
	I0805 23:15:34.658711       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-044175-m04"
	I0805 23:15:34.861295       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.798335ms"
	I0805 23:15:34.864183       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="2.779398ms"
	
	
	==> kube-proxy [04c382fd4a32fe8685a6f643ecf7a291e4d542c2223975f9df92991fe566b12a] <==
	I0805 23:10:58.215353       1 server_linux.go:69] "Using iptables proxy"
	I0805 23:10:58.317044       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.57"]
	I0805 23:10:58.380977       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0805 23:10:58.381046       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 23:10:58.381064       1 server_linux.go:165] "Using iptables Proxier"
	I0805 23:10:58.385444       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 23:10:58.385706       1 server.go:872] "Version info" version="v1.30.3"
	I0805 23:10:58.385735       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 23:10:58.388101       1 config.go:192] "Starting service config controller"
	I0805 23:10:58.388578       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 23:10:58.388682       1 config.go:101] "Starting endpoint slice config controller"
	I0805 23:10:58.388703       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 23:10:58.391001       1 config.go:319] "Starting node config controller"
	I0805 23:10:58.391039       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 23:10:58.489499       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0805 23:10:58.489653       1 shared_informer.go:320] Caches are synced for service config
	I0805 23:10:58.491225       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2a85f2254a23cdec7e89ff8de2e31b06ddf2853808330965760217f1fd834004] <==
	W0805 23:10:42.174988       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0805 23:10:42.175038       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0805 23:10:42.233253       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0805 23:10:42.233303       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0805 23:10:42.397575       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0805 23:10:42.397708       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0805 23:10:42.441077       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0805 23:10:42.441194       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0805 23:10:42.451687       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0805 23:10:42.451734       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0805 23:10:44.475061       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0805 23:13:36.096100       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-wmfql\": pod busybox-fc5497c4f-wmfql is already assigned to node \"ha-044175\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-wmfql" node="ha-044175"
	E0805 23:13:36.096217       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod bfc8bad7-d43d-4beb-991e-339a4ce96ab5(default/busybox-fc5497c4f-wmfql) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-wmfql"
	E0805 23:13:36.096246       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-wmfql\": pod busybox-fc5497c4f-wmfql is already assigned to node \"ha-044175\"" pod="default/busybox-fc5497c4f-wmfql"
	I0805 23:13:36.096326       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-wmfql" node="ha-044175"
	E0805 23:13:36.098987       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-tpqpw\": pod busybox-fc5497c4f-tpqpw is already assigned to node \"ha-044175-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-tpqpw" node="ha-044175-m02"
	E0805 23:13:36.101555       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 0d6e0955-71b4-4790-89ab-452b0750a85d(default/busybox-fc5497c4f-tpqpw) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-tpqpw"
	E0805 23:13:36.102338       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-tpqpw\": pod busybox-fc5497c4f-tpqpw is already assigned to node \"ha-044175-m02\"" pod="default/busybox-fc5497c4f-tpqpw"
	I0805 23:13:36.102510       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-tpqpw" node="ha-044175-m02"
	E0805 23:14:13.759910       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-s9t2d\": pod kindnet-s9t2d is already assigned to node \"ha-044175-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-s9t2d" node="ha-044175-m04"
	E0805 23:14:13.760048       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 59dff32f-9b2c-4cdd-b706-fabcab7bdc67(kube-system/kindnet-s9t2d) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-s9t2d"
	E0805 23:14:13.760073       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-s9t2d\": pod kindnet-s9t2d is already assigned to node \"ha-044175-m04\"" pod="kube-system/kindnet-s9t2d"
	I0805 23:14:13.760122       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-s9t2d" node="ha-044175-m04"
	E0805 23:14:15.570618       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-s6qcf\": pod kindnet-s6qcf is already assigned to node \"ha-044175-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-s6qcf" node="ha-044175-m04"
	E0805 23:14:15.570740       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-s6qcf\": pod kindnet-s6qcf is already assigned to node \"ha-044175-m04\"" pod="kube-system/kindnet-s6qcf"
	
	
	==> kubelet <==
	Aug 05 23:12:44 ha-044175 kubelet[1390]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:12:44 ha-044175 kubelet[1390]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:13:36 ha-044175 kubelet[1390]: I0805 23:13:36.067090    1390 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-g9bml" podStartSLOduration=159.067002213 podStartE2EDuration="2m39.067002213s" podCreationTimestamp="2024-08-05 23:10:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 23:11:14.934556526 +0000 UTC m=+30.436928970" watchObservedRunningTime="2024-08-05 23:13:36.067002213 +0000 UTC m=+171.569374712"
	Aug 05 23:13:36 ha-044175 kubelet[1390]: I0805 23:13:36.068658    1390 topology_manager.go:215] "Topology Admit Handler" podUID="bfc8bad7-d43d-4beb-991e-339a4ce96ab5" podNamespace="default" podName="busybox-fc5497c4f-wmfql"
	Aug 05 23:13:36 ha-044175 kubelet[1390]: I0805 23:13:36.129155    1390 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stpx5\" (UniqueName: \"kubernetes.io/projected/bfc8bad7-d43d-4beb-991e-339a4ce96ab5-kube-api-access-stpx5\") pod \"busybox-fc5497c4f-wmfql\" (UID: \"bfc8bad7-d43d-4beb-991e-339a4ce96ab5\") " pod="default/busybox-fc5497c4f-wmfql"
	Aug 05 23:13:44 ha-044175 kubelet[1390]: E0805 23:13:44.738483    1390 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:13:44 ha-044175 kubelet[1390]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:13:44 ha-044175 kubelet[1390]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:13:44 ha-044175 kubelet[1390]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:13:44 ha-044175 kubelet[1390]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:14:44 ha-044175 kubelet[1390]: E0805 23:14:44.733769    1390 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:14:44 ha-044175 kubelet[1390]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:14:44 ha-044175 kubelet[1390]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:14:44 ha-044175 kubelet[1390]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:14:44 ha-044175 kubelet[1390]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:15:44 ha-044175 kubelet[1390]: E0805 23:15:44.735270    1390 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:15:44 ha-044175 kubelet[1390]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:15:44 ha-044175 kubelet[1390]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:15:44 ha-044175 kubelet[1390]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:15:44 ha-044175 kubelet[1390]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:16:44 ha-044175 kubelet[1390]: E0805 23:16:44.737234    1390 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:16:44 ha-044175 kubelet[1390]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:16:44 ha-044175 kubelet[1390]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:16:44 ha-044175 kubelet[1390]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:16:44 ha-044175 kubelet[1390]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
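
The repeated "Could not set up iptables canary" entries in the kubelet log above come from the kubelet's periodic canary check against the ip6tables "nat" table; the error text ("Table does not exist (do you need to insmod?)") points at the ip6table_nat kernel module not being loaded in the guest. A minimal diagnostic sketch, reusing the ssh invocation style used elsewhere in this report; whether loading the module is appropriate for this guest image is an assumption, not something verified by this run:

	out/minikube-linux-amd64 -p ha-044175 ssh "lsmod | grep ip6table_nat || echo 'ip6table_nat not loaded'"
	out/minikube-linux-amd64 -p ha-044175 ssh "sudo ip6tables -t nat -L -n"
	# hypothetical remediation, only if the module ships with this guest kernel:
	out/minikube-linux-amd64 -p ha-044175 ssh "sudo modprobe ip6table_nat"
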
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-044175 -n ha-044175
helpers_test.go:261: (dbg) Run:  kubectl --context ha-044175 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.99s)
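
The kube-scheduler entries earlier in this dump ("Operation cannot be fulfilled on pods/binding ... already assigned to node") appear to record a bind race rather than a scheduling failure: by the time the DefaultBinder plugin posted the Binding, the pod already had a nodeName set, so the API server rejected the duplicate and the scheduler dropped the retry ("Pod has been assigned to node. Abort adding it back to queue."). If the pods still exist, the binding can be confirmed with the same kubectl context this test uses (pod names taken from the log above):

	kubectl --context ha-044175 -n default get pod busybox-fc5497c4f-tpqpw -o jsonpath='{.spec.nodeName}{"\n"}'
	kubectl --context ha-044175 -n kube-system get pod kindnet-s9t2d -o jsonpath='{.spec.nodeName}{"\n"}'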

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (48.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-044175 status -v=7 --alsologtostderr: exit status 3 (3.206767068s)

                                                
                                                
-- stdout --
	ha-044175
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-044175-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-044175-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-044175-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 23:17:20.665530   34179 out.go:291] Setting OutFile to fd 1 ...
	I0805 23:17:20.665630   34179 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:17:20.665637   34179 out.go:304] Setting ErrFile to fd 2...
	I0805 23:17:20.665642   34179 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:17:20.665847   34179 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-9606/.minikube/bin
	I0805 23:17:20.666043   34179 out.go:298] Setting JSON to false
	I0805 23:17:20.666071   34179 mustload.go:65] Loading cluster: ha-044175
	I0805 23:17:20.666175   34179 notify.go:220] Checking for updates...
	I0805 23:17:20.666579   34179 config.go:182] Loaded profile config "ha-044175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:17:20.666649   34179 status.go:255] checking status of ha-044175 ...
	I0805 23:17:20.667396   34179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:20.667478   34179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:20.685419   34179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33397
	I0805 23:17:20.686139   34179 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:20.686792   34179 main.go:141] libmachine: Using API Version  1
	I0805 23:17:20.686824   34179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:20.687282   34179 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:20.687491   34179 main.go:141] libmachine: (ha-044175) Calling .GetState
	I0805 23:17:20.689207   34179 status.go:330] ha-044175 host status = "Running" (err=<nil>)
	I0805 23:17:20.689232   34179 host.go:66] Checking if "ha-044175" exists ...
	I0805 23:17:20.689514   34179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:20.689553   34179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:20.705755   34179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34509
	I0805 23:17:20.706286   34179 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:20.706753   34179 main.go:141] libmachine: Using API Version  1
	I0805 23:17:20.706773   34179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:20.707149   34179 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:20.707372   34179 main.go:141] libmachine: (ha-044175) Calling .GetIP
	I0805 23:17:20.710560   34179 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:17:20.710993   34179 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:17:20.711044   34179 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:17:20.711209   34179 host.go:66] Checking if "ha-044175" exists ...
	I0805 23:17:20.711585   34179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:20.711651   34179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:20.726874   34179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40931
	I0805 23:17:20.727293   34179 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:20.727778   34179 main.go:141] libmachine: Using API Version  1
	I0805 23:17:20.727799   34179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:20.728148   34179 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:20.728342   34179 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:17:20.728530   34179 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 23:17:20.728559   34179 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:17:20.731998   34179 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:17:20.732426   34179 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:17:20.732467   34179 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:17:20.732648   34179 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:17:20.732830   34179 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:17:20.733038   34179 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:17:20.733191   34179 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/id_rsa Username:docker}
	I0805 23:17:20.819133   34179 ssh_runner.go:195] Run: systemctl --version
	I0805 23:17:20.825943   34179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 23:17:20.842437   34179 kubeconfig.go:125] found "ha-044175" server: "https://192.168.39.254:8443"
	I0805 23:17:20.842467   34179 api_server.go:166] Checking apiserver status ...
	I0805 23:17:20.842508   34179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 23:17:20.858262   34179 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup
	W0805 23:17:20.869017   34179 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 23:17:20.869066   34179 ssh_runner.go:195] Run: ls
	I0805 23:17:20.874481   34179 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0805 23:17:20.878683   34179 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0805 23:17:20.878704   34179 status.go:422] ha-044175 apiserver status = Running (err=<nil>)
	I0805 23:17:20.878712   34179 status.go:257] ha-044175 status: &{Name:ha-044175 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 23:17:20.878727   34179 status.go:255] checking status of ha-044175-m02 ...
	I0805 23:17:20.879094   34179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:20.879132   34179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:20.894973   34179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46007
	I0805 23:17:20.895417   34179 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:20.895924   34179 main.go:141] libmachine: Using API Version  1
	I0805 23:17:20.895942   34179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:20.896307   34179 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:20.896505   34179 main.go:141] libmachine: (ha-044175-m02) Calling .GetState
	I0805 23:17:20.898147   34179 status.go:330] ha-044175-m02 host status = "Running" (err=<nil>)
	I0805 23:17:20.898166   34179 host.go:66] Checking if "ha-044175-m02" exists ...
	I0805 23:17:20.898516   34179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:20.898556   34179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:20.913241   34179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38509
	I0805 23:17:20.913638   34179 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:20.914025   34179 main.go:141] libmachine: Using API Version  1
	I0805 23:17:20.914053   34179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:20.914335   34179 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:20.914476   34179 main.go:141] libmachine: (ha-044175-m02) Calling .GetIP
	I0805 23:17:20.916925   34179 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:17:20.917291   34179 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:17:20.917315   34179 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:17:20.917467   34179 host.go:66] Checking if "ha-044175-m02" exists ...
	I0805 23:17:20.917782   34179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:20.917815   34179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:20.932310   34179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46805
	I0805 23:17:20.932703   34179 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:20.933185   34179 main.go:141] libmachine: Using API Version  1
	I0805 23:17:20.933204   34179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:20.933468   34179 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:20.933642   34179 main.go:141] libmachine: (ha-044175-m02) Calling .DriverName
	I0805 23:17:20.933828   34179 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 23:17:20.933845   34179 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHHostname
	I0805 23:17:20.936738   34179 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:17:20.937194   34179 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:17:20.937215   34179 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:17:20.937333   34179 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHPort
	I0805 23:17:20.937493   34179 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHKeyPath
	I0805 23:17:20.937608   34179 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHUsername
	I0805 23:17:20.937706   34179 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m02/id_rsa Username:docker}
	W0805 23:17:23.467327   34179 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.112:22: connect: no route to host
	W0805 23:17:23.467422   34179 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.112:22: connect: no route to host
	E0805 23:17:23.467444   34179 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.112:22: connect: no route to host
	I0805 23:17:23.467457   34179 status.go:257] ha-044175-m02 status: &{Name:ha-044175-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0805 23:17:23.467481   34179 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.112:22: connect: no route to host
	I0805 23:17:23.467491   34179 status.go:255] checking status of ha-044175-m03 ...
	I0805 23:17:23.467937   34179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:23.467990   34179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:23.483968   34179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36131
	I0805 23:17:23.484423   34179 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:23.485011   34179 main.go:141] libmachine: Using API Version  1
	I0805 23:17:23.485036   34179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:23.485395   34179 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:23.485574   34179 main.go:141] libmachine: (ha-044175-m03) Calling .GetState
	I0805 23:17:23.487106   34179 status.go:330] ha-044175-m03 host status = "Running" (err=<nil>)
	I0805 23:17:23.487119   34179 host.go:66] Checking if "ha-044175-m03" exists ...
	I0805 23:17:23.487463   34179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:23.487505   34179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:23.502072   34179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45271
	I0805 23:17:23.502414   34179 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:23.502823   34179 main.go:141] libmachine: Using API Version  1
	I0805 23:17:23.502846   34179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:23.503206   34179 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:23.503396   34179 main.go:141] libmachine: (ha-044175-m03) Calling .GetIP
	I0805 23:17:23.506301   34179 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:17:23.506757   34179 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:17:23.506794   34179 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:17:23.506938   34179 host.go:66] Checking if "ha-044175-m03" exists ...
	I0805 23:17:23.507268   34179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:23.507302   34179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:23.521489   34179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45755
	I0805 23:17:23.521877   34179 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:23.522330   34179 main.go:141] libmachine: Using API Version  1
	I0805 23:17:23.522349   34179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:23.522639   34179 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:23.522846   34179 main.go:141] libmachine: (ha-044175-m03) Calling .DriverName
	I0805 23:17:23.523035   34179 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 23:17:23.523077   34179 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHHostname
	I0805 23:17:23.525797   34179 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:17:23.526276   34179 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:17:23.526307   34179 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:17:23.526484   34179 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHPort
	I0805 23:17:23.526653   34179 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHKeyPath
	I0805 23:17:23.526761   34179 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHUsername
	I0805 23:17:23.526894   34179 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m03/id_rsa Username:docker}
	I0805 23:17:23.615278   34179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 23:17:23.630851   34179 kubeconfig.go:125] found "ha-044175" server: "https://192.168.39.254:8443"
	I0805 23:17:23.630876   34179 api_server.go:166] Checking apiserver status ...
	I0805 23:17:23.630904   34179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 23:17:23.646284   34179 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1566/cgroup
	W0805 23:17:23.657452   34179 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1566/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 23:17:23.657500   34179 ssh_runner.go:195] Run: ls
	I0805 23:17:23.668591   34179 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0805 23:17:23.673077   34179 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0805 23:17:23.673109   34179 status.go:422] ha-044175-m03 apiserver status = Running (err=<nil>)
	I0805 23:17:23.673121   34179 status.go:257] ha-044175-m03 status: &{Name:ha-044175-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 23:17:23.673142   34179 status.go:255] checking status of ha-044175-m04 ...
	I0805 23:17:23.673558   34179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:23.673598   34179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:23.688790   34179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34541
	I0805 23:17:23.689356   34179 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:23.689981   34179 main.go:141] libmachine: Using API Version  1
	I0805 23:17:23.690012   34179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:23.690382   34179 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:23.690604   34179 main.go:141] libmachine: (ha-044175-m04) Calling .GetState
	I0805 23:17:23.692366   34179 status.go:330] ha-044175-m04 host status = "Running" (err=<nil>)
	I0805 23:17:23.692381   34179 host.go:66] Checking if "ha-044175-m04" exists ...
	I0805 23:17:23.692667   34179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:23.692701   34179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:23.707443   34179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36283
	I0805 23:17:23.707835   34179 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:23.708266   34179 main.go:141] libmachine: Using API Version  1
	I0805 23:17:23.708289   34179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:23.708640   34179 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:23.708806   34179 main.go:141] libmachine: (ha-044175-m04) Calling .GetIP
	I0805 23:17:23.712165   34179 main.go:141] libmachine: (ha-044175-m04) DBG | domain ha-044175-m04 has defined MAC address 52:54:00:e5:ba:4d in network mk-ha-044175
	I0805 23:17:23.712788   34179 main.go:141] libmachine: (ha-044175-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:ba:4d", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:13:59 +0000 UTC Type:0 Mac:52:54:00:e5:ba:4d Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-044175-m04 Clientid:01:52:54:00:e5:ba:4d}
	I0805 23:17:23.712823   34179 main.go:141] libmachine: (ha-044175-m04) DBG | domain ha-044175-m04 has defined IP address 192.168.39.228 and MAC address 52:54:00:e5:ba:4d in network mk-ha-044175
	I0805 23:17:23.713036   34179 host.go:66] Checking if "ha-044175-m04" exists ...
	I0805 23:17:23.713326   34179 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:23.713361   34179 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:23.727753   34179 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44179
	I0805 23:17:23.728168   34179 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:23.728605   34179 main.go:141] libmachine: Using API Version  1
	I0805 23:17:23.728627   34179 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:23.728932   34179 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:23.729107   34179 main.go:141] libmachine: (ha-044175-m04) Calling .DriverName
	I0805 23:17:23.729270   34179 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 23:17:23.729291   34179 main.go:141] libmachine: (ha-044175-m04) Calling .GetSSHHostname
	I0805 23:17:23.732140   34179 main.go:141] libmachine: (ha-044175-m04) DBG | domain ha-044175-m04 has defined MAC address 52:54:00:e5:ba:4d in network mk-ha-044175
	I0805 23:17:23.732476   34179 main.go:141] libmachine: (ha-044175-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:ba:4d", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:13:59 +0000 UTC Type:0 Mac:52:54:00:e5:ba:4d Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-044175-m04 Clientid:01:52:54:00:e5:ba:4d}
	I0805 23:17:23.732490   34179 main.go:141] libmachine: (ha-044175-m04) DBG | domain ha-044175-m04 has defined IP address 192.168.39.228 and MAC address 52:54:00:e5:ba:4d in network mk-ha-044175
	I0805 23:17:23.732645   34179 main.go:141] libmachine: (ha-044175-m04) Calling .GetSSHPort
	I0805 23:17:23.732800   34179 main.go:141] libmachine: (ha-044175-m04) Calling .GetSSHKeyPath
	I0805 23:17:23.732970   34179 main.go:141] libmachine: (ha-044175-m04) Calling .GetSSHUsername
	I0805 23:17:23.733123   34179 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m04/id_rsa Username:docker}
	I0805 23:17:23.814969   34179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 23:17:23.829346   34179 status.go:257] ha-044175-m04 status: &{Name:ha-044175-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
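
In the status output above, ha-044175-m02 is reported as host: Error / kubelet: Nonexistent purely because the SSH dial failed: the kvm2 driver's GetState call still returns "Running", but the follow-up "df -h /var" over SSH gets "dial tcp 192.168.39.112:22: connect: no route to host", and status then marks kubelet and apiserver as Nonexistent without probing the node any further. Since this runs only a few seconds after "node start m02", the VM may simply not have finished booting; the test retries the status call, as the following runs show. Host-side checks that would distinguish "still booting" from "networking broken", assuming access to the libvirt instance the kvm2 driver uses and the domain name shown in the debug output (none of these were run as part of the test):

	virsh domstate ha-044175-m02
	ping -c 2 192.168.39.112
	nc -zv -w 5 192.168.39.112 22
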
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-044175 status -v=7 --alsologtostderr: exit status 3 (5.292093655s)

                                                
                                                
-- stdout --
	ha-044175
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-044175-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-044175-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-044175-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 23:17:24.719567   34279 out.go:291] Setting OutFile to fd 1 ...
	I0805 23:17:24.719813   34279 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:17:24.719823   34279 out.go:304] Setting ErrFile to fd 2...
	I0805 23:17:24.719829   34279 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:17:24.720019   34279 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-9606/.minikube/bin
	I0805 23:17:24.720189   34279 out.go:298] Setting JSON to false
	I0805 23:17:24.720214   34279 mustload.go:65] Loading cluster: ha-044175
	I0805 23:17:24.720302   34279 notify.go:220] Checking for updates...
	I0805 23:17:24.720694   34279 config.go:182] Loaded profile config "ha-044175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:17:24.720715   34279 status.go:255] checking status of ha-044175 ...
	I0805 23:17:24.721223   34279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:24.721268   34279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:24.740088   34279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35599
	I0805 23:17:24.740460   34279 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:24.740959   34279 main.go:141] libmachine: Using API Version  1
	I0805 23:17:24.740981   34279 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:24.741266   34279 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:24.741462   34279 main.go:141] libmachine: (ha-044175) Calling .GetState
	I0805 23:17:24.743182   34279 status.go:330] ha-044175 host status = "Running" (err=<nil>)
	I0805 23:17:24.743212   34279 host.go:66] Checking if "ha-044175" exists ...
	I0805 23:17:24.743570   34279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:24.743609   34279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:24.758543   34279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43777
	I0805 23:17:24.758934   34279 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:24.759412   34279 main.go:141] libmachine: Using API Version  1
	I0805 23:17:24.759447   34279 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:24.759796   34279 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:24.759983   34279 main.go:141] libmachine: (ha-044175) Calling .GetIP
	I0805 23:17:24.762833   34279 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:17:24.763307   34279 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:17:24.763328   34279 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:17:24.763501   34279 host.go:66] Checking if "ha-044175" exists ...
	I0805 23:17:24.763777   34279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:24.763815   34279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:24.779515   34279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38277
	I0805 23:17:24.779900   34279 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:24.780371   34279 main.go:141] libmachine: Using API Version  1
	I0805 23:17:24.780392   34279 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:24.780660   34279 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:24.780902   34279 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:17:24.781075   34279 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 23:17:24.781092   34279 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:17:24.784446   34279 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:17:24.784986   34279 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:17:24.785012   34279 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:17:24.785197   34279 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:17:24.785375   34279 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:17:24.785536   34279 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:17:24.785809   34279 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/id_rsa Username:docker}
	I0805 23:17:24.866841   34279 ssh_runner.go:195] Run: systemctl --version
	I0805 23:17:24.873311   34279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 23:17:24.893817   34279 kubeconfig.go:125] found "ha-044175" server: "https://192.168.39.254:8443"
	I0805 23:17:24.893843   34279 api_server.go:166] Checking apiserver status ...
	I0805 23:17:24.893872   34279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 23:17:24.911002   34279 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup
	W0805 23:17:24.921724   34279 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 23:17:24.921776   34279 ssh_runner.go:195] Run: ls
	I0805 23:17:24.927284   34279 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0805 23:17:24.931420   34279 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0805 23:17:24.931448   34279 status.go:422] ha-044175 apiserver status = Running (err=<nil>)
	I0805 23:17:24.931458   34279 status.go:257] ha-044175 status: &{Name:ha-044175 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 23:17:24.931475   34279 status.go:255] checking status of ha-044175-m02 ...
	I0805 23:17:24.931780   34279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:24.931836   34279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:24.946572   34279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33659
	I0805 23:17:24.946964   34279 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:24.947427   34279 main.go:141] libmachine: Using API Version  1
	I0805 23:17:24.947446   34279 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:24.947732   34279 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:24.947919   34279 main.go:141] libmachine: (ha-044175-m02) Calling .GetState
	I0805 23:17:24.949351   34279 status.go:330] ha-044175-m02 host status = "Running" (err=<nil>)
	I0805 23:17:24.949366   34279 host.go:66] Checking if "ha-044175-m02" exists ...
	I0805 23:17:24.949633   34279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:24.949664   34279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:24.963763   34279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38485
	I0805 23:17:24.964118   34279 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:24.964584   34279 main.go:141] libmachine: Using API Version  1
	I0805 23:17:24.964603   34279 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:24.964914   34279 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:24.965077   34279 main.go:141] libmachine: (ha-044175-m02) Calling .GetIP
	I0805 23:17:24.967718   34279 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:17:24.968095   34279 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:17:24.968119   34279 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:17:24.968234   34279 host.go:66] Checking if "ha-044175-m02" exists ...
	I0805 23:17:24.968508   34279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:24.968548   34279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:24.982390   34279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42743
	I0805 23:17:24.982848   34279 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:24.983356   34279 main.go:141] libmachine: Using API Version  1
	I0805 23:17:24.983377   34279 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:24.983688   34279 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:24.983880   34279 main.go:141] libmachine: (ha-044175-m02) Calling .DriverName
	I0805 23:17:24.984055   34279 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 23:17:24.984072   34279 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHHostname
	I0805 23:17:24.986557   34279 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:17:24.987034   34279 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:17:24.987072   34279 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:17:24.987253   34279 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHPort
	I0805 23:17:24.987442   34279 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHKeyPath
	I0805 23:17:24.987587   34279 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHUsername
	I0805 23:17:24.987722   34279 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m02/id_rsa Username:docker}
	W0805 23:17:26.535335   34279 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.112:22: connect: no route to host
	I0805 23:17:26.535384   34279 retry.go:31] will retry after 337.977109ms: dial tcp 192.168.39.112:22: connect: no route to host
	W0805 23:17:29.607292   34279 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.112:22: connect: no route to host
	W0805 23:17:29.607385   34279 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.112:22: connect: no route to host
	E0805 23:17:29.607422   34279 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.112:22: connect: no route to host
	I0805 23:17:29.607432   34279 status.go:257] ha-044175-m02 status: &{Name:ha-044175-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0805 23:17:29.607459   34279 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.112:22: connect: no route to host
	I0805 23:17:29.607468   34279 status.go:255] checking status of ha-044175-m03 ...
	I0805 23:17:29.607766   34279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:29.607805   34279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:29.622708   34279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44999
	I0805 23:17:29.623136   34279 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:29.623717   34279 main.go:141] libmachine: Using API Version  1
	I0805 23:17:29.623740   34279 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:29.624029   34279 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:29.624226   34279 main.go:141] libmachine: (ha-044175-m03) Calling .GetState
	I0805 23:17:29.625642   34279 status.go:330] ha-044175-m03 host status = "Running" (err=<nil>)
	I0805 23:17:29.625660   34279 host.go:66] Checking if "ha-044175-m03" exists ...
	I0805 23:17:29.625983   34279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:29.626035   34279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:29.641024   34279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36253
	I0805 23:17:29.641426   34279 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:29.641912   34279 main.go:141] libmachine: Using API Version  1
	I0805 23:17:29.641932   34279 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:29.642307   34279 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:29.642503   34279 main.go:141] libmachine: (ha-044175-m03) Calling .GetIP
	I0805 23:17:29.645544   34279 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:17:29.645930   34279 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:17:29.645954   34279 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:17:29.646205   34279 host.go:66] Checking if "ha-044175-m03" exists ...
	I0805 23:17:29.646526   34279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:29.646561   34279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:29.662672   34279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43997
	I0805 23:17:29.663171   34279 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:29.663738   34279 main.go:141] libmachine: Using API Version  1
	I0805 23:17:29.663763   34279 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:29.664116   34279 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:29.664301   34279 main.go:141] libmachine: (ha-044175-m03) Calling .DriverName
	I0805 23:17:29.664489   34279 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 23:17:29.664510   34279 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHHostname
	I0805 23:17:29.667848   34279 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:17:29.668396   34279 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:17:29.668420   34279 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:17:29.668596   34279 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHPort
	I0805 23:17:29.668821   34279 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHKeyPath
	I0805 23:17:29.669027   34279 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHUsername
	I0805 23:17:29.669181   34279 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m03/id_rsa Username:docker}
	I0805 23:17:29.758799   34279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 23:17:29.775758   34279 kubeconfig.go:125] found "ha-044175" server: "https://192.168.39.254:8443"
	I0805 23:17:29.775788   34279 api_server.go:166] Checking apiserver status ...
	I0805 23:17:29.775824   34279 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 23:17:29.790892   34279 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1566/cgroup
	W0805 23:17:29.803084   34279 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1566/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 23:17:29.803145   34279 ssh_runner.go:195] Run: ls
	I0805 23:17:29.807962   34279 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0805 23:17:29.812495   34279 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0805 23:17:29.812519   34279 status.go:422] ha-044175-m03 apiserver status = Running (err=<nil>)
	I0805 23:17:29.812527   34279 status.go:257] ha-044175-m03 status: &{Name:ha-044175-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 23:17:29.812542   34279 status.go:255] checking status of ha-044175-m04 ...
	I0805 23:17:29.812834   34279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:29.812882   34279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:29.828761   34279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34127
	I0805 23:17:29.829257   34279 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:29.829727   34279 main.go:141] libmachine: Using API Version  1
	I0805 23:17:29.829748   34279 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:29.830140   34279 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:29.830332   34279 main.go:141] libmachine: (ha-044175-m04) Calling .GetState
	I0805 23:17:29.831907   34279 status.go:330] ha-044175-m04 host status = "Running" (err=<nil>)
	I0805 23:17:29.831925   34279 host.go:66] Checking if "ha-044175-m04" exists ...
	I0805 23:17:29.832251   34279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:29.832289   34279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:29.847592   34279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34249
	I0805 23:17:29.847960   34279 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:29.848397   34279 main.go:141] libmachine: Using API Version  1
	I0805 23:17:29.848420   34279 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:29.848708   34279 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:29.848919   34279 main.go:141] libmachine: (ha-044175-m04) Calling .GetIP
	I0805 23:17:29.851649   34279 main.go:141] libmachine: (ha-044175-m04) DBG | domain ha-044175-m04 has defined MAC address 52:54:00:e5:ba:4d in network mk-ha-044175
	I0805 23:17:29.852024   34279 main.go:141] libmachine: (ha-044175-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:ba:4d", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:13:59 +0000 UTC Type:0 Mac:52:54:00:e5:ba:4d Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-044175-m04 Clientid:01:52:54:00:e5:ba:4d}
	I0805 23:17:29.852044   34279 main.go:141] libmachine: (ha-044175-m04) DBG | domain ha-044175-m04 has defined IP address 192.168.39.228 and MAC address 52:54:00:e5:ba:4d in network mk-ha-044175
	I0805 23:17:29.852196   34279 host.go:66] Checking if "ha-044175-m04" exists ...
	I0805 23:17:29.852520   34279 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:29.852561   34279 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:29.868422   34279 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44361
	I0805 23:17:29.868805   34279 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:29.869337   34279 main.go:141] libmachine: Using API Version  1
	I0805 23:17:29.869360   34279 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:29.869650   34279 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:29.869910   34279 main.go:141] libmachine: (ha-044175-m04) Calling .DriverName
	I0805 23:17:29.870122   34279 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 23:17:29.870160   34279 main.go:141] libmachine: (ha-044175-m04) Calling .GetSSHHostname
	I0805 23:17:29.873277   34279 main.go:141] libmachine: (ha-044175-m04) DBG | domain ha-044175-m04 has defined MAC address 52:54:00:e5:ba:4d in network mk-ha-044175
	I0805 23:17:29.873727   34279 main.go:141] libmachine: (ha-044175-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:ba:4d", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:13:59 +0000 UTC Type:0 Mac:52:54:00:e5:ba:4d Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-044175-m04 Clientid:01:52:54:00:e5:ba:4d}
	I0805 23:17:29.873751   34279 main.go:141] libmachine: (ha-044175-m04) DBG | domain ha-044175-m04 has defined IP address 192.168.39.228 and MAC address 52:54:00:e5:ba:4d in network mk-ha-044175
	I0805 23:17:29.873905   34279 main.go:141] libmachine: (ha-044175-m04) Calling .GetSSHPort
	I0805 23:17:29.874087   34279 main.go:141] libmachine: (ha-044175-m04) Calling .GetSSHKeyPath
	I0805 23:17:29.874272   34279 main.go:141] libmachine: (ha-044175-m04) Calling .GetSSHUsername
	I0805 23:17:29.874412   34279 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m04/id_rsa Username:docker}
	I0805 23:17:29.955358   34279 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 23:17:29.970455   34279 status.go:257] ha-044175-m04 status: &{Name:ha-044175-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
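
The "unable to find freezer cgroup" warning that shows up in every status pass is not what makes the check fail: the grep for a v1 "freezer" line in /proc/<apiserver pid>/cgroup finds nothing (which is what happens, for example, on a cgroup v2 host where the file holds a single "0::/..." entry), and the code then falls back to probing https://192.168.39.254:8443/healthz, which is what actually produced the "apiserver: Running" results above. A small sketch to see which case applies here; the PIDs (1201 on ha-044175, 1566 on ha-044175-m03) are taken from this run and may have changed, and the exact stat flags are an assumption about the guest userland:

	out/minikube-linux-amd64 -p ha-044175 ssh "cat /proc/1201/cgroup"
	out/minikube-linux-amd64 -p ha-044175 ssh "stat -fc %T /sys/fs/cgroup"
	curl -k https://192.168.39.254:8443/healthz
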
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-044175 status -v=7 --alsologtostderr: exit status 3 (4.175455802s)

                                                
                                                
-- stdout --
	ha-044175
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-044175-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-044175-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-044175-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 23:17:32.230822   34398 out.go:291] Setting OutFile to fd 1 ...
	I0805 23:17:32.230955   34398 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:17:32.230966   34398 out.go:304] Setting ErrFile to fd 2...
	I0805 23:17:32.230973   34398 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:17:32.231269   34398 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-9606/.minikube/bin
	I0805 23:17:32.231445   34398 out.go:298] Setting JSON to false
	I0805 23:17:32.231469   34398 mustload.go:65] Loading cluster: ha-044175
	I0805 23:17:32.231570   34398 notify.go:220] Checking for updates...
	I0805 23:17:32.231791   34398 config.go:182] Loaded profile config "ha-044175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:17:32.231804   34398 status.go:255] checking status of ha-044175 ...
	I0805 23:17:32.232195   34398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:32.232244   34398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:32.251425   34398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38833
	I0805 23:17:32.251936   34398 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:32.252517   34398 main.go:141] libmachine: Using API Version  1
	I0805 23:17:32.252541   34398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:32.252937   34398 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:32.253124   34398 main.go:141] libmachine: (ha-044175) Calling .GetState
	I0805 23:17:32.254880   34398 status.go:330] ha-044175 host status = "Running" (err=<nil>)
	I0805 23:17:32.254906   34398 host.go:66] Checking if "ha-044175" exists ...
	I0805 23:17:32.255234   34398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:32.255266   34398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:32.270923   34398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40305
	I0805 23:17:32.271387   34398 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:32.271853   34398 main.go:141] libmachine: Using API Version  1
	I0805 23:17:32.271872   34398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:32.272160   34398 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:32.272343   34398 main.go:141] libmachine: (ha-044175) Calling .GetIP
	I0805 23:17:32.275759   34398 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:17:32.276215   34398 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:17:32.276254   34398 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:17:32.276322   34398 host.go:66] Checking if "ha-044175" exists ...
	I0805 23:17:32.276685   34398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:32.276734   34398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:32.291880   34398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39647
	I0805 23:17:32.292315   34398 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:32.292738   34398 main.go:141] libmachine: Using API Version  1
	I0805 23:17:32.292764   34398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:32.293081   34398 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:32.293340   34398 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:17:32.293511   34398 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 23:17:32.293541   34398 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:17:32.296314   34398 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:17:32.296723   34398 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:17:32.296751   34398 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:17:32.296866   34398 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:17:32.297105   34398 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:17:32.297284   34398 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:17:32.297441   34398 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/id_rsa Username:docker}
	I0805 23:17:32.375909   34398 ssh_runner.go:195] Run: systemctl --version
	I0805 23:17:32.382733   34398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 23:17:32.399819   34398 kubeconfig.go:125] found "ha-044175" server: "https://192.168.39.254:8443"
	I0805 23:17:32.399850   34398 api_server.go:166] Checking apiserver status ...
	I0805 23:17:32.399901   34398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 23:17:32.416751   34398 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup
	W0805 23:17:32.426814   34398 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 23:17:32.426870   34398 ssh_runner.go:195] Run: ls
	I0805 23:17:32.431618   34398 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0805 23:17:32.435876   34398 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0805 23:17:32.435902   34398 status.go:422] ha-044175 apiserver status = Running (err=<nil>)
	I0805 23:17:32.435912   34398 status.go:257] ha-044175 status: &{Name:ha-044175 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 23:17:32.435929   34398 status.go:255] checking status of ha-044175-m02 ...
	I0805 23:17:32.436243   34398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:32.436278   34398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:32.451599   34398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45889
	I0805 23:17:32.452064   34398 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:32.452541   34398 main.go:141] libmachine: Using API Version  1
	I0805 23:17:32.452563   34398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:32.452876   34398 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:32.453079   34398 main.go:141] libmachine: (ha-044175-m02) Calling .GetState
	I0805 23:17:32.454650   34398 status.go:330] ha-044175-m02 host status = "Running" (err=<nil>)
	I0805 23:17:32.454665   34398 host.go:66] Checking if "ha-044175-m02" exists ...
	I0805 23:17:32.454952   34398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:32.454990   34398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:32.469500   34398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37405
	I0805 23:17:32.469897   34398 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:32.470355   34398 main.go:141] libmachine: Using API Version  1
	I0805 23:17:32.470379   34398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:32.470674   34398 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:32.470838   34398 main.go:141] libmachine: (ha-044175-m02) Calling .GetIP
	I0805 23:17:32.473295   34398 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:17:32.473655   34398 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:17:32.473678   34398 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:17:32.473791   34398 host.go:66] Checking if "ha-044175-m02" exists ...
	I0805 23:17:32.474130   34398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:32.474184   34398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:32.488822   34398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43447
	I0805 23:17:32.489218   34398 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:32.489649   34398 main.go:141] libmachine: Using API Version  1
	I0805 23:17:32.489674   34398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:32.489967   34398 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:32.490165   34398 main.go:141] libmachine: (ha-044175-m02) Calling .DriverName
	I0805 23:17:32.490330   34398 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 23:17:32.490350   34398 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHHostname
	I0805 23:17:32.493033   34398 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:17:32.493526   34398 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:17:32.493552   34398 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:17:32.493693   34398 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHPort
	I0805 23:17:32.493849   34398 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHKeyPath
	I0805 23:17:32.493969   34398 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHUsername
	I0805 23:17:32.494093   34398 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m02/id_rsa Username:docker}
	W0805 23:17:32.683251   34398 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.112:22: connect: no route to host
	I0805 23:17:32.683306   34398 retry.go:31] will retry after 253.22711ms: dial tcp 192.168.39.112:22: connect: no route to host
	W0805 23:17:36.007304   34398 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.112:22: connect: no route to host
	W0805 23:17:36.007391   34398 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.112:22: connect: no route to host
	E0805 23:17:36.007409   34398 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.112:22: connect: no route to host
	I0805 23:17:36.007418   34398 status.go:257] ha-044175-m02 status: &{Name:ha-044175-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0805 23:17:36.007452   34398 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.112:22: connect: no route to host
	I0805 23:17:36.007459   34398 status.go:255] checking status of ha-044175-m03 ...
	I0805 23:17:36.007742   34398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:36.007781   34398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:36.022192   34398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40799
	I0805 23:17:36.022600   34398 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:36.023086   34398 main.go:141] libmachine: Using API Version  1
	I0805 23:17:36.023108   34398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:36.023416   34398 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:36.023638   34398 main.go:141] libmachine: (ha-044175-m03) Calling .GetState
	I0805 23:17:36.025218   34398 status.go:330] ha-044175-m03 host status = "Running" (err=<nil>)
	I0805 23:17:36.025237   34398 host.go:66] Checking if "ha-044175-m03" exists ...
	I0805 23:17:36.025531   34398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:36.025562   34398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:36.040962   34398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39487
	I0805 23:17:36.041429   34398 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:36.041937   34398 main.go:141] libmachine: Using API Version  1
	I0805 23:17:36.041952   34398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:36.042259   34398 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:36.042434   34398 main.go:141] libmachine: (ha-044175-m03) Calling .GetIP
	I0805 23:17:36.045294   34398 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:17:36.045668   34398 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:17:36.045710   34398 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:17:36.045832   34398 host.go:66] Checking if "ha-044175-m03" exists ...
	I0805 23:17:36.046284   34398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:36.046329   34398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:36.060587   34398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43125
	I0805 23:17:36.060946   34398 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:36.061412   34398 main.go:141] libmachine: Using API Version  1
	I0805 23:17:36.061430   34398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:36.061726   34398 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:36.061908   34398 main.go:141] libmachine: (ha-044175-m03) Calling .DriverName
	I0805 23:17:36.062103   34398 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 23:17:36.062121   34398 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHHostname
	I0805 23:17:36.064703   34398 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:17:36.065070   34398 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:17:36.065103   34398 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:17:36.065191   34398 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHPort
	I0805 23:17:36.065339   34398 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHKeyPath
	I0805 23:17:36.065470   34398 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHUsername
	I0805 23:17:36.065597   34398 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m03/id_rsa Username:docker}
	I0805 23:17:36.151832   34398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 23:17:36.165803   34398 kubeconfig.go:125] found "ha-044175" server: "https://192.168.39.254:8443"
	I0805 23:17:36.165830   34398 api_server.go:166] Checking apiserver status ...
	I0805 23:17:36.165868   34398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 23:17:36.188576   34398 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1566/cgroup
	W0805 23:17:36.200320   34398 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1566/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 23:17:36.200377   34398 ssh_runner.go:195] Run: ls
	I0805 23:17:36.204985   34398 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0805 23:17:36.210939   34398 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0805 23:17:36.210962   34398 status.go:422] ha-044175-m03 apiserver status = Running (err=<nil>)
	I0805 23:17:36.210969   34398 status.go:257] ha-044175-m03 status: &{Name:ha-044175-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 23:17:36.210987   34398 status.go:255] checking status of ha-044175-m04 ...
	I0805 23:17:36.211323   34398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:36.211356   34398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:36.226260   34398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38617
	I0805 23:17:36.226703   34398 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:36.227251   34398 main.go:141] libmachine: Using API Version  1
	I0805 23:17:36.227272   34398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:36.227649   34398 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:36.227802   34398 main.go:141] libmachine: (ha-044175-m04) Calling .GetState
	I0805 23:17:36.229394   34398 status.go:330] ha-044175-m04 host status = "Running" (err=<nil>)
	I0805 23:17:36.229409   34398 host.go:66] Checking if "ha-044175-m04" exists ...
	I0805 23:17:36.229757   34398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:36.229793   34398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:36.244444   34398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40485
	I0805 23:17:36.244937   34398 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:36.245461   34398 main.go:141] libmachine: Using API Version  1
	I0805 23:17:36.245485   34398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:36.245780   34398 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:36.245957   34398 main.go:141] libmachine: (ha-044175-m04) Calling .GetIP
	I0805 23:17:36.248903   34398 main.go:141] libmachine: (ha-044175-m04) DBG | domain ha-044175-m04 has defined MAC address 52:54:00:e5:ba:4d in network mk-ha-044175
	I0805 23:17:36.249467   34398 main.go:141] libmachine: (ha-044175-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:ba:4d", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:13:59 +0000 UTC Type:0 Mac:52:54:00:e5:ba:4d Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-044175-m04 Clientid:01:52:54:00:e5:ba:4d}
	I0805 23:17:36.249500   34398 main.go:141] libmachine: (ha-044175-m04) DBG | domain ha-044175-m04 has defined IP address 192.168.39.228 and MAC address 52:54:00:e5:ba:4d in network mk-ha-044175
	I0805 23:17:36.249673   34398 host.go:66] Checking if "ha-044175-m04" exists ...
	I0805 23:17:36.249973   34398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:36.250016   34398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:36.265087   34398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40357
	I0805 23:17:36.265442   34398 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:36.265937   34398 main.go:141] libmachine: Using API Version  1
	I0805 23:17:36.265961   34398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:36.266265   34398 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:36.266459   34398 main.go:141] libmachine: (ha-044175-m04) Calling .DriverName
	I0805 23:17:36.266606   34398 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 23:17:36.266633   34398 main.go:141] libmachine: (ha-044175-m04) Calling .GetSSHHostname
	I0805 23:17:36.269236   34398 main.go:141] libmachine: (ha-044175-m04) DBG | domain ha-044175-m04 has defined MAC address 52:54:00:e5:ba:4d in network mk-ha-044175
	I0805 23:17:36.269598   34398 main.go:141] libmachine: (ha-044175-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:ba:4d", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:13:59 +0000 UTC Type:0 Mac:52:54:00:e5:ba:4d Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-044175-m04 Clientid:01:52:54:00:e5:ba:4d}
	I0805 23:17:36.269622   34398 main.go:141] libmachine: (ha-044175-m04) DBG | domain ha-044175-m04 has defined IP address 192.168.39.228 and MAC address 52:54:00:e5:ba:4d in network mk-ha-044175
	I0805 23:17:36.269755   34398 main.go:141] libmachine: (ha-044175-m04) Calling .GetSSHPort
	I0805 23:17:36.269921   34398 main.go:141] libmachine: (ha-044175-m04) Calling .GetSSHKeyPath
	I0805 23:17:36.270074   34398 main.go:141] libmachine: (ha-044175-m04) Calling .GetSSHUsername
	I0805 23:17:36.270209   34398 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m04/id_rsa Username:docker}
	I0805 23:17:36.350971   34398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 23:17:36.365035   34398 status.go:257] ha-044175-m04 status: &{Name:ha-044175-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
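The status trace above walks through three probes per control-plane node: a disk-usage check over SSH (`df -h /var`), a `pgrep` for kube-apiserver (with a best-effort freezer-cgroup lookup), and an HTTPS GET against the load-balanced `/healthz` endpoint at 192.168.39.254:8443. The short Go sketch below approximates only the healthz step so the "returned 200: ok" lines are easier to read; the endpoint URL comes from the log, while the client construction (timeout, InsecureSkipVerify) is an illustrative assumption, not minikube's actual client code.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz performs a single GET against an apiserver /healthz endpoint,
// roughly mirroring the "Checking apiserver healthz at ..." step in the log.
// TLS verification is skipped purely for illustration; a real client would
// load the cluster CA from the kubeconfig instead.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body) // the log prints "ok" here
	return nil
}

func main() {
	// 192.168.39.254:8443 is the VIP shown in the log; substitute your own endpoint.
	if err := checkHealthz("https://192.168.39.254:8443/healthz"); err != nil {
		fmt.Println("apiserver status = Stopped:", err)
	}
}
```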
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-044175 status -v=7 --alsologtostderr: exit status 3 (3.749723761s)

                                                
                                                
-- stdout --
	ha-044175
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-044175-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-044175-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-044175-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 23:17:39.279028   34500 out.go:291] Setting OutFile to fd 1 ...
	I0805 23:17:39.279325   34500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:17:39.279335   34500 out.go:304] Setting ErrFile to fd 2...
	I0805 23:17:39.279346   34500 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:17:39.279565   34500 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-9606/.minikube/bin
	I0805 23:17:39.279777   34500 out.go:298] Setting JSON to false
	I0805 23:17:39.279800   34500 mustload.go:65] Loading cluster: ha-044175
	I0805 23:17:39.279909   34500 notify.go:220] Checking for updates...
	I0805 23:17:39.280244   34500 config.go:182] Loaded profile config "ha-044175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:17:39.280258   34500 status.go:255] checking status of ha-044175 ...
	I0805 23:17:39.280656   34500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:39.280699   34500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:39.296292   34500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46869
	I0805 23:17:39.296731   34500 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:39.297393   34500 main.go:141] libmachine: Using API Version  1
	I0805 23:17:39.297451   34500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:39.297905   34500 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:39.298127   34500 main.go:141] libmachine: (ha-044175) Calling .GetState
	I0805 23:17:39.299621   34500 status.go:330] ha-044175 host status = "Running" (err=<nil>)
	I0805 23:17:39.299656   34500 host.go:66] Checking if "ha-044175" exists ...
	I0805 23:17:39.299927   34500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:39.299962   34500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:39.314762   34500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38009
	I0805 23:17:39.315267   34500 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:39.315800   34500 main.go:141] libmachine: Using API Version  1
	I0805 23:17:39.315826   34500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:39.316100   34500 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:39.316283   34500 main.go:141] libmachine: (ha-044175) Calling .GetIP
	I0805 23:17:39.318779   34500 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:17:39.319234   34500 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:17:39.319259   34500 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:17:39.319437   34500 host.go:66] Checking if "ha-044175" exists ...
	I0805 23:17:39.319832   34500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:39.319889   34500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:39.334706   34500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39255
	I0805 23:17:39.335112   34500 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:39.335534   34500 main.go:141] libmachine: Using API Version  1
	I0805 23:17:39.335553   34500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:39.335904   34500 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:39.336084   34500 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:17:39.336276   34500 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 23:17:39.336314   34500 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:17:39.338785   34500 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:17:39.339198   34500 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:17:39.339220   34500 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:17:39.339340   34500 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:17:39.339496   34500 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:17:39.339648   34500 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:17:39.339774   34500 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/id_rsa Username:docker}
	I0805 23:17:39.420146   34500 ssh_runner.go:195] Run: systemctl --version
	I0805 23:17:39.427863   34500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 23:17:39.446724   34500 kubeconfig.go:125] found "ha-044175" server: "https://192.168.39.254:8443"
	I0805 23:17:39.446759   34500 api_server.go:166] Checking apiserver status ...
	I0805 23:17:39.446835   34500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 23:17:39.461687   34500 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup
	W0805 23:17:39.472819   34500 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 23:17:39.472879   34500 ssh_runner.go:195] Run: ls
	I0805 23:17:39.478248   34500 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0805 23:17:39.482344   34500 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0805 23:17:39.482368   34500 status.go:422] ha-044175 apiserver status = Running (err=<nil>)
	I0805 23:17:39.482377   34500 status.go:257] ha-044175 status: &{Name:ha-044175 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 23:17:39.482391   34500 status.go:255] checking status of ha-044175-m02 ...
	I0805 23:17:39.482675   34500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:39.482704   34500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:39.497852   34500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35827
	I0805 23:17:39.498334   34500 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:39.498813   34500 main.go:141] libmachine: Using API Version  1
	I0805 23:17:39.498836   34500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:39.499165   34500 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:39.499352   34500 main.go:141] libmachine: (ha-044175-m02) Calling .GetState
	I0805 23:17:39.500846   34500 status.go:330] ha-044175-m02 host status = "Running" (err=<nil>)
	I0805 23:17:39.500872   34500 host.go:66] Checking if "ha-044175-m02" exists ...
	I0805 23:17:39.501198   34500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:39.501244   34500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:39.516640   34500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40153
	I0805 23:17:39.517040   34500 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:39.517533   34500 main.go:141] libmachine: Using API Version  1
	I0805 23:17:39.517552   34500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:39.517830   34500 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:39.518004   34500 main.go:141] libmachine: (ha-044175-m02) Calling .GetIP
	I0805 23:17:39.520739   34500 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:17:39.521199   34500 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:17:39.521221   34500 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:17:39.521351   34500 host.go:66] Checking if "ha-044175-m02" exists ...
	I0805 23:17:39.521639   34500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:39.521677   34500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:39.536888   34500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44235
	I0805 23:17:39.537270   34500 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:39.537746   34500 main.go:141] libmachine: Using API Version  1
	I0805 23:17:39.537765   34500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:39.538045   34500 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:39.538273   34500 main.go:141] libmachine: (ha-044175-m02) Calling .DriverName
	I0805 23:17:39.538500   34500 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 23:17:39.538518   34500 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHHostname
	I0805 23:17:39.541327   34500 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:17:39.541754   34500 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:17:39.541780   34500 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:17:39.541962   34500 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHPort
	I0805 23:17:39.542132   34500 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHKeyPath
	I0805 23:17:39.542281   34500 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHUsername
	I0805 23:17:39.542392   34500 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m02/id_rsa Username:docker}
	W0805 23:17:42.599270   34500 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.112:22: connect: no route to host
	W0805 23:17:42.599356   34500 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.112:22: connect: no route to host
	E0805 23:17:42.599372   34500 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.112:22: connect: no route to host
	I0805 23:17:42.599380   34500 status.go:257] ha-044175-m02 status: &{Name:ha-044175-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0805 23:17:42.599396   34500 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.112:22: connect: no route to host
	I0805 23:17:42.599403   34500 status.go:255] checking status of ha-044175-m03 ...
	I0805 23:17:42.599709   34500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:42.599753   34500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:42.616288   34500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36679
	I0805 23:17:42.616669   34500 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:42.617093   34500 main.go:141] libmachine: Using API Version  1
	I0805 23:17:42.617116   34500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:42.617399   34500 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:42.617561   34500 main.go:141] libmachine: (ha-044175-m03) Calling .GetState
	I0805 23:17:42.619058   34500 status.go:330] ha-044175-m03 host status = "Running" (err=<nil>)
	I0805 23:17:42.619077   34500 host.go:66] Checking if "ha-044175-m03" exists ...
	I0805 23:17:42.619371   34500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:42.619408   34500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:42.633845   34500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43459
	I0805 23:17:42.634249   34500 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:42.634679   34500 main.go:141] libmachine: Using API Version  1
	I0805 23:17:42.634701   34500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:42.635008   34500 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:42.635238   34500 main.go:141] libmachine: (ha-044175-m03) Calling .GetIP
	I0805 23:17:42.637965   34500 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:17:42.638401   34500 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:17:42.638428   34500 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:17:42.638553   34500 host.go:66] Checking if "ha-044175-m03" exists ...
	I0805 23:17:42.638858   34500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:42.638891   34500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:42.654133   34500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38453
	I0805 23:17:42.654594   34500 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:42.655259   34500 main.go:141] libmachine: Using API Version  1
	I0805 23:17:42.655307   34500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:42.655610   34500 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:42.655785   34500 main.go:141] libmachine: (ha-044175-m03) Calling .DriverName
	I0805 23:17:42.655975   34500 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 23:17:42.655997   34500 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHHostname
	I0805 23:17:42.659169   34500 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:17:42.659614   34500 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:17:42.659644   34500 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:17:42.659820   34500 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHPort
	I0805 23:17:42.659994   34500 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHKeyPath
	I0805 23:17:42.660161   34500 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHUsername
	I0805 23:17:42.660288   34500 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m03/id_rsa Username:docker}
	I0805 23:17:42.757455   34500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 23:17:42.775039   34500 kubeconfig.go:125] found "ha-044175" server: "https://192.168.39.254:8443"
	I0805 23:17:42.775090   34500 api_server.go:166] Checking apiserver status ...
	I0805 23:17:42.775138   34500 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 23:17:42.792586   34500 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1566/cgroup
	W0805 23:17:42.805838   34500 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1566/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 23:17:42.805911   34500 ssh_runner.go:195] Run: ls
	I0805 23:17:42.811520   34500 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0805 23:17:42.817545   34500 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0805 23:17:42.817578   34500 status.go:422] ha-044175-m03 apiserver status = Running (err=<nil>)
	I0805 23:17:42.817590   34500 status.go:257] ha-044175-m03 status: &{Name:ha-044175-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 23:17:42.817610   34500 status.go:255] checking status of ha-044175-m04 ...
	I0805 23:17:42.818040   34500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:42.818118   34500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:42.834538   34500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40155
	I0805 23:17:42.835144   34500 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:42.835643   34500 main.go:141] libmachine: Using API Version  1
	I0805 23:17:42.835668   34500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:42.835934   34500 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:42.836118   34500 main.go:141] libmachine: (ha-044175-m04) Calling .GetState
	I0805 23:17:42.837546   34500 status.go:330] ha-044175-m04 host status = "Running" (err=<nil>)
	I0805 23:17:42.837561   34500 host.go:66] Checking if "ha-044175-m04" exists ...
	I0805 23:17:42.837835   34500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:42.837876   34500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:42.854054   34500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43275
	I0805 23:17:42.854522   34500 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:42.855018   34500 main.go:141] libmachine: Using API Version  1
	I0805 23:17:42.855043   34500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:42.855405   34500 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:42.855653   34500 main.go:141] libmachine: (ha-044175-m04) Calling .GetIP
	I0805 23:17:42.859372   34500 main.go:141] libmachine: (ha-044175-m04) DBG | domain ha-044175-m04 has defined MAC address 52:54:00:e5:ba:4d in network mk-ha-044175
	I0805 23:17:42.859794   34500 main.go:141] libmachine: (ha-044175-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:ba:4d", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:13:59 +0000 UTC Type:0 Mac:52:54:00:e5:ba:4d Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-044175-m04 Clientid:01:52:54:00:e5:ba:4d}
	I0805 23:17:42.859826   34500 main.go:141] libmachine: (ha-044175-m04) DBG | domain ha-044175-m04 has defined IP address 192.168.39.228 and MAC address 52:54:00:e5:ba:4d in network mk-ha-044175
	I0805 23:17:42.860202   34500 host.go:66] Checking if "ha-044175-m04" exists ...
	I0805 23:17:42.860635   34500 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:42.860676   34500 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:42.877789   34500 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32789
	I0805 23:17:42.878201   34500 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:42.878849   34500 main.go:141] libmachine: Using API Version  1
	I0805 23:17:42.878877   34500 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:42.879259   34500 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:42.879613   34500 main.go:141] libmachine: (ha-044175-m04) Calling .DriverName
	I0805 23:17:42.879797   34500 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 23:17:42.879819   34500 main.go:141] libmachine: (ha-044175-m04) Calling .GetSSHHostname
	I0805 23:17:42.883361   34500 main.go:141] libmachine: (ha-044175-m04) DBG | domain ha-044175-m04 has defined MAC address 52:54:00:e5:ba:4d in network mk-ha-044175
	I0805 23:17:42.883922   34500 main.go:141] libmachine: (ha-044175-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:ba:4d", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:13:59 +0000 UTC Type:0 Mac:52:54:00:e5:ba:4d Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-044175-m04 Clientid:01:52:54:00:e5:ba:4d}
	I0805 23:17:42.883952   34500 main.go:141] libmachine: (ha-044175-m04) DBG | domain ha-044175-m04 has defined IP address 192.168.39.228 and MAC address 52:54:00:e5:ba:4d in network mk-ha-044175
	I0805 23:17:42.884143   34500 main.go:141] libmachine: (ha-044175-m04) Calling .GetSSHPort
	I0805 23:17:42.884335   34500 main.go:141] libmachine: (ha-044175-m04) Calling .GetSSHKeyPath
	I0805 23:17:42.884556   34500 main.go:141] libmachine: (ha-044175-m04) Calling .GetSSHUsername
	I0805 23:17:42.884706   34500 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m04/id_rsa Username:docker}
	I0805 23:17:42.966698   34500 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 23:17:42.984230   34500 status.go:257] ha-044175-m04 status: &{Name:ha-044175-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
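The repeated `dial tcp 192.168.39.112:22: connect: no route to host` errors are what flip ha-044175-m02 to `host: Error` with kubelet and apiserver reported as `Nonexistent`: the status command cannot open an SSH session to the node at all, so it never reaches the kubelet or apiserver probes. A minimal sketch of that reachability check is shown below, as a plain TCP dial with a timeout; this is an assumption-level illustration, not minikube's actual sshutil retry logic.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// reachable reports whether a TCP connection to addr can be established
// within the timeout. A "no route to host" condition surfaces here as a
// dial error, which corresponds to the Host:Error result in the report.
func reachable(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	// 192.168.39.112:22 is the SSH endpoint of ha-044175-m02 in the log;
	// substitute the node address you want to test.
	if err := reachable("192.168.39.112:22", 5*time.Second); err != nil {
		fmt.Println("host: Error -", err)
		return
	}
	fmt.Println("host: Running - ssh port reachable")
}
```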
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-044175 status -v=7 --alsologtostderr: exit status 3 (3.711490773s)

                                                
                                                
-- stdout --
	ha-044175
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-044175-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-044175-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-044175-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 23:17:47.449303   35037 out.go:291] Setting OutFile to fd 1 ...
	I0805 23:17:47.449410   35037 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:17:47.449418   35037 out.go:304] Setting ErrFile to fd 2...
	I0805 23:17:47.449422   35037 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:17:47.449631   35037 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-9606/.minikube/bin
	I0805 23:17:47.449784   35037 out.go:298] Setting JSON to false
	I0805 23:17:47.449806   35037 mustload.go:65] Loading cluster: ha-044175
	I0805 23:17:47.449844   35037 notify.go:220] Checking for updates...
	I0805 23:17:47.450184   35037 config.go:182] Loaded profile config "ha-044175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:17:47.450197   35037 status.go:255] checking status of ha-044175 ...
	I0805 23:17:47.450595   35037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:47.450637   35037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:47.468713   35037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45601
	I0805 23:17:47.469118   35037 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:47.469751   35037 main.go:141] libmachine: Using API Version  1
	I0805 23:17:47.469789   35037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:47.470133   35037 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:47.470339   35037 main.go:141] libmachine: (ha-044175) Calling .GetState
	I0805 23:17:47.472027   35037 status.go:330] ha-044175 host status = "Running" (err=<nil>)
	I0805 23:17:47.472048   35037 host.go:66] Checking if "ha-044175" exists ...
	I0805 23:17:47.472322   35037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:47.472360   35037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:47.486843   35037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45809
	I0805 23:17:47.487331   35037 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:47.487841   35037 main.go:141] libmachine: Using API Version  1
	I0805 23:17:47.487861   35037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:47.488131   35037 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:47.488321   35037 main.go:141] libmachine: (ha-044175) Calling .GetIP
	I0805 23:17:47.491062   35037 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:17:47.491495   35037 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:17:47.491526   35037 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:17:47.491691   35037 host.go:66] Checking if "ha-044175" exists ...
	I0805 23:17:47.492005   35037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:47.492045   35037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:47.506935   35037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41867
	I0805 23:17:47.507346   35037 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:47.507780   35037 main.go:141] libmachine: Using API Version  1
	I0805 23:17:47.507807   35037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:47.508125   35037 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:47.508274   35037 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:17:47.508499   35037 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 23:17:47.508533   35037 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:17:47.511171   35037 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:17:47.511539   35037 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:17:47.511575   35037 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:17:47.511687   35037 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:17:47.511853   35037 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:17:47.512006   35037 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:17:47.512136   35037 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/id_rsa Username:docker}
	I0805 23:17:47.590858   35037 ssh_runner.go:195] Run: systemctl --version
	I0805 23:17:47.598341   35037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 23:17:47.614490   35037 kubeconfig.go:125] found "ha-044175" server: "https://192.168.39.254:8443"
	I0805 23:17:47.614521   35037 api_server.go:166] Checking apiserver status ...
	I0805 23:17:47.614566   35037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 23:17:47.628630   35037 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup
	W0805 23:17:47.638025   35037 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 23:17:47.638071   35037 ssh_runner.go:195] Run: ls
	I0805 23:17:47.642964   35037 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0805 23:17:47.647357   35037 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0805 23:17:47.647379   35037 status.go:422] ha-044175 apiserver status = Running (err=<nil>)
	I0805 23:17:47.647388   35037 status.go:257] ha-044175 status: &{Name:ha-044175 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 23:17:47.647401   35037 status.go:255] checking status of ha-044175-m02 ...
	I0805 23:17:47.647733   35037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:47.647771   35037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:47.663280   35037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36191
	I0805 23:17:47.663733   35037 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:47.664194   35037 main.go:141] libmachine: Using API Version  1
	I0805 23:17:47.664218   35037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:47.664527   35037 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:47.664727   35037 main.go:141] libmachine: (ha-044175-m02) Calling .GetState
	I0805 23:17:47.666266   35037 status.go:330] ha-044175-m02 host status = "Running" (err=<nil>)
	I0805 23:17:47.666279   35037 host.go:66] Checking if "ha-044175-m02" exists ...
	I0805 23:17:47.666687   35037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:47.666734   35037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:47.681610   35037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38613
	I0805 23:17:47.682048   35037 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:47.682503   35037 main.go:141] libmachine: Using API Version  1
	I0805 23:17:47.682531   35037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:47.682815   35037 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:47.682984   35037 main.go:141] libmachine: (ha-044175-m02) Calling .GetIP
	I0805 23:17:47.685841   35037 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:17:47.686276   35037 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:17:47.686301   35037 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:17:47.686442   35037 host.go:66] Checking if "ha-044175-m02" exists ...
	I0805 23:17:47.686725   35037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:47.686763   35037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:47.701579   35037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43807
	I0805 23:17:47.701920   35037 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:47.702296   35037 main.go:141] libmachine: Using API Version  1
	I0805 23:17:47.702313   35037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:47.702599   35037 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:47.702746   35037 main.go:141] libmachine: (ha-044175-m02) Calling .DriverName
	I0805 23:17:47.702909   35037 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 23:17:47.702927   35037 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHHostname
	I0805 23:17:47.705416   35037 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:17:47.705792   35037 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:17:47.705813   35037 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:17:47.705951   35037 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHPort
	I0805 23:17:47.706129   35037 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHKeyPath
	I0805 23:17:47.706285   35037 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHUsername
	I0805 23:17:47.706433   35037 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m02/id_rsa Username:docker}
	W0805 23:17:50.759303   35037 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.112:22: connect: no route to host
	W0805 23:17:50.759397   35037 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.112:22: connect: no route to host
	E0805 23:17:50.759413   35037 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.112:22: connect: no route to host
	I0805 23:17:50.759431   35037 status.go:257] ha-044175-m02 status: &{Name:ha-044175-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0805 23:17:50.759450   35037 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.112:22: connect: no route to host
	I0805 23:17:50.759457   35037 status.go:255] checking status of ha-044175-m03 ...
	I0805 23:17:50.759737   35037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:50.759773   35037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:50.776720   35037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33261
	I0805 23:17:50.777148   35037 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:50.777678   35037 main.go:141] libmachine: Using API Version  1
	I0805 23:17:50.777698   35037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:50.778063   35037 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:50.778275   35037 main.go:141] libmachine: (ha-044175-m03) Calling .GetState
	I0805 23:17:50.780143   35037 status.go:330] ha-044175-m03 host status = "Running" (err=<nil>)
	I0805 23:17:50.780158   35037 host.go:66] Checking if "ha-044175-m03" exists ...
	I0805 23:17:50.780496   35037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:50.780532   35037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:50.795106   35037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45043
	I0805 23:17:50.795555   35037 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:50.796114   35037 main.go:141] libmachine: Using API Version  1
	I0805 23:17:50.796133   35037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:50.796481   35037 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:50.796656   35037 main.go:141] libmachine: (ha-044175-m03) Calling .GetIP
	I0805 23:17:50.799802   35037 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:17:50.800263   35037 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:17:50.800289   35037 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:17:50.800428   35037 host.go:66] Checking if "ha-044175-m03" exists ...
	I0805 23:17:50.800742   35037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:50.800775   35037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:50.816957   35037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36989
	I0805 23:17:50.817307   35037 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:50.817745   35037 main.go:141] libmachine: Using API Version  1
	I0805 23:17:50.817766   35037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:50.818045   35037 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:50.818252   35037 main.go:141] libmachine: (ha-044175-m03) Calling .DriverName
	I0805 23:17:50.818458   35037 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 23:17:50.818482   35037 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHHostname
	I0805 23:17:50.821455   35037 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:17:50.821874   35037 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:17:50.821901   35037 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:17:50.822020   35037 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHPort
	I0805 23:17:50.822220   35037 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHKeyPath
	I0805 23:17:50.822355   35037 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHUsername
	I0805 23:17:50.822487   35037 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m03/id_rsa Username:docker}
	I0805 23:17:50.907879   35037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 23:17:50.928637   35037 kubeconfig.go:125] found "ha-044175" server: "https://192.168.39.254:8443"
	I0805 23:17:50.928665   35037 api_server.go:166] Checking apiserver status ...
	I0805 23:17:50.928697   35037 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 23:17:50.943242   35037 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1566/cgroup
	W0805 23:17:50.953934   35037 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1566/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 23:17:50.953990   35037 ssh_runner.go:195] Run: ls
	I0805 23:17:50.958809   35037 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0805 23:17:50.963256   35037 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0805 23:17:50.963290   35037 status.go:422] ha-044175-m03 apiserver status = Running (err=<nil>)
	I0805 23:17:50.963302   35037 status.go:257] ha-044175-m03 status: &{Name:ha-044175-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 23:17:50.963316   35037 status.go:255] checking status of ha-044175-m04 ...
	I0805 23:17:50.963619   35037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:50.963658   35037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:50.978215   35037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40519
	I0805 23:17:50.978680   35037 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:50.979165   35037 main.go:141] libmachine: Using API Version  1
	I0805 23:17:50.979187   35037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:50.979479   35037 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:50.979638   35037 main.go:141] libmachine: (ha-044175-m04) Calling .GetState
	I0805 23:17:50.981189   35037 status.go:330] ha-044175-m04 host status = "Running" (err=<nil>)
	I0805 23:17:50.981204   35037 host.go:66] Checking if "ha-044175-m04" exists ...
	I0805 23:17:50.981467   35037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:50.981502   35037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:50.995837   35037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41253
	I0805 23:17:50.996218   35037 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:50.996642   35037 main.go:141] libmachine: Using API Version  1
	I0805 23:17:50.996658   35037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:50.996999   35037 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:50.997205   35037 main.go:141] libmachine: (ha-044175-m04) Calling .GetIP
	I0805 23:17:51.000379   35037 main.go:141] libmachine: (ha-044175-m04) DBG | domain ha-044175-m04 has defined MAC address 52:54:00:e5:ba:4d in network mk-ha-044175
	I0805 23:17:51.000764   35037 main.go:141] libmachine: (ha-044175-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:ba:4d", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:13:59 +0000 UTC Type:0 Mac:52:54:00:e5:ba:4d Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-044175-m04 Clientid:01:52:54:00:e5:ba:4d}
	I0805 23:17:51.000794   35037 main.go:141] libmachine: (ha-044175-m04) DBG | domain ha-044175-m04 has defined IP address 192.168.39.228 and MAC address 52:54:00:e5:ba:4d in network mk-ha-044175
	I0805 23:17:51.000927   35037 host.go:66] Checking if "ha-044175-m04" exists ...
	I0805 23:17:51.001310   35037 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:51.001348   35037 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:51.016821   35037 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34009
	I0805 23:17:51.017260   35037 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:51.017754   35037 main.go:141] libmachine: Using API Version  1
	I0805 23:17:51.017776   35037 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:51.018092   35037 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:51.018287   35037 main.go:141] libmachine: (ha-044175-m04) Calling .DriverName
	I0805 23:17:51.018502   35037 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 23:17:51.018524   35037 main.go:141] libmachine: (ha-044175-m04) Calling .GetSSHHostname
	I0805 23:17:51.021248   35037 main.go:141] libmachine: (ha-044175-m04) DBG | domain ha-044175-m04 has defined MAC address 52:54:00:e5:ba:4d in network mk-ha-044175
	I0805 23:17:51.021816   35037 main.go:141] libmachine: (ha-044175-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:ba:4d", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:13:59 +0000 UTC Type:0 Mac:52:54:00:e5:ba:4d Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-044175-m04 Clientid:01:52:54:00:e5:ba:4d}
	I0805 23:17:51.021843   35037 main.go:141] libmachine: (ha-044175-m04) DBG | domain ha-044175-m04 has defined IP address 192.168.39.228 and MAC address 52:54:00:e5:ba:4d in network mk-ha-044175
	I0805 23:17:51.021984   35037 main.go:141] libmachine: (ha-044175-m04) Calling .GetSSHPort
	I0805 23:17:51.022145   35037 main.go:141] libmachine: (ha-044175-m04) Calling .GetSSHKeyPath
	I0805 23:17:51.022314   35037 main.go:141] libmachine: (ha-044175-m04) Calling .GetSSHUsername
	I0805 23:17:51.022466   35037 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m04/id_rsa Username:docker}
	I0805 23:17:51.103272   35037 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 23:17:51.118813   35037 status.go:257] ha-044175-m04 status: &{Name:ha-044175-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-044175 status -v=7 --alsologtostderr: exit status 3 (3.734389712s)

                                                
                                                
-- stdout --
	ha-044175
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-044175-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-044175-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-044175-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 23:17:55.745438   35154 out.go:291] Setting OutFile to fd 1 ...
	I0805 23:17:55.745565   35154 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:17:55.745574   35154 out.go:304] Setting ErrFile to fd 2...
	I0805 23:17:55.745578   35154 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:17:55.745770   35154 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-9606/.minikube/bin
	I0805 23:17:55.745975   35154 out.go:298] Setting JSON to false
	I0805 23:17:55.746000   35154 mustload.go:65] Loading cluster: ha-044175
	I0805 23:17:55.746107   35154 notify.go:220] Checking for updates...
	I0805 23:17:55.746437   35154 config.go:182] Loaded profile config "ha-044175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:17:55.746451   35154 status.go:255] checking status of ha-044175 ...
	I0805 23:17:55.746899   35154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:55.746960   35154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:55.765975   35154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44321
	I0805 23:17:55.766419   35154 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:55.766998   35154 main.go:141] libmachine: Using API Version  1
	I0805 23:17:55.767022   35154 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:55.767441   35154 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:55.767644   35154 main.go:141] libmachine: (ha-044175) Calling .GetState
	I0805 23:17:55.769324   35154 status.go:330] ha-044175 host status = "Running" (err=<nil>)
	I0805 23:17:55.769355   35154 host.go:66] Checking if "ha-044175" exists ...
	I0805 23:17:55.769794   35154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:55.769872   35154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:55.784449   35154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45863
	I0805 23:17:55.784853   35154 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:55.785378   35154 main.go:141] libmachine: Using API Version  1
	I0805 23:17:55.785405   35154 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:55.785708   35154 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:55.785890   35154 main.go:141] libmachine: (ha-044175) Calling .GetIP
	I0805 23:17:55.788779   35154 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:17:55.789226   35154 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:17:55.789249   35154 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:17:55.789422   35154 host.go:66] Checking if "ha-044175" exists ...
	I0805 23:17:55.789712   35154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:55.789744   35154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:55.805106   35154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37345
	I0805 23:17:55.805501   35154 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:55.805990   35154 main.go:141] libmachine: Using API Version  1
	I0805 23:17:55.806014   35154 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:55.806334   35154 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:55.806518   35154 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:17:55.806767   35154 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 23:17:55.806807   35154 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:17:55.809653   35154 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:17:55.810074   35154 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:17:55.810097   35154 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:17:55.810211   35154 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:17:55.810426   35154 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:17:55.810575   35154 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:17:55.810720   35154 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/id_rsa Username:docker}
	I0805 23:17:55.887735   35154 ssh_runner.go:195] Run: systemctl --version
	I0805 23:17:55.895466   35154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 23:17:55.911427   35154 kubeconfig.go:125] found "ha-044175" server: "https://192.168.39.254:8443"
	I0805 23:17:55.911455   35154 api_server.go:166] Checking apiserver status ...
	I0805 23:17:55.911497   35154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 23:17:55.926625   35154 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup
	W0805 23:17:55.936765   35154 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 23:17:55.936815   35154 ssh_runner.go:195] Run: ls
	I0805 23:17:55.941322   35154 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0805 23:17:55.947166   35154 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0805 23:17:55.947189   35154 status.go:422] ha-044175 apiserver status = Running (err=<nil>)
	I0805 23:17:55.947198   35154 status.go:257] ha-044175 status: &{Name:ha-044175 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 23:17:55.947217   35154 status.go:255] checking status of ha-044175-m02 ...
	I0805 23:17:55.947507   35154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:55.947538   35154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:55.962659   35154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41519
	I0805 23:17:55.963062   35154 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:55.963461   35154 main.go:141] libmachine: Using API Version  1
	I0805 23:17:55.963481   35154 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:55.963757   35154 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:55.963941   35154 main.go:141] libmachine: (ha-044175-m02) Calling .GetState
	I0805 23:17:55.965475   35154 status.go:330] ha-044175-m02 host status = "Running" (err=<nil>)
	I0805 23:17:55.965491   35154 host.go:66] Checking if "ha-044175-m02" exists ...
	I0805 23:17:55.965785   35154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:55.965852   35154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:55.980577   35154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40509
	I0805 23:17:55.980985   35154 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:55.981386   35154 main.go:141] libmachine: Using API Version  1
	I0805 23:17:55.981400   35154 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:55.981723   35154 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:55.981933   35154 main.go:141] libmachine: (ha-044175-m02) Calling .GetIP
	I0805 23:17:55.984718   35154 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:17:55.985277   35154 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:17:55.985312   35154 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:17:55.985458   35154 host.go:66] Checking if "ha-044175-m02" exists ...
	I0805 23:17:55.985749   35154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:55.985787   35154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:56.000558   35154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40525
	I0805 23:17:56.000941   35154 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:56.001394   35154 main.go:141] libmachine: Using API Version  1
	I0805 23:17:56.001418   35154 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:56.001717   35154 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:56.001899   35154 main.go:141] libmachine: (ha-044175-m02) Calling .DriverName
	I0805 23:17:56.002192   35154 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 23:17:56.002211   35154 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHHostname
	I0805 23:17:56.004585   35154 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:17:56.005053   35154 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:17:56.005087   35154 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:17:56.005255   35154 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHPort
	I0805 23:17:56.005410   35154 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHKeyPath
	I0805 23:17:56.005559   35154 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHUsername
	I0805 23:17:56.005695   35154 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m02/id_rsa Username:docker}
	W0805 23:17:59.079363   35154 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.112:22: connect: no route to host
	W0805 23:17:59.079438   35154 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.112:22: connect: no route to host
	E0805 23:17:59.079451   35154 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.112:22: connect: no route to host
	I0805 23:17:59.079460   35154 status.go:257] ha-044175-m02 status: &{Name:ha-044175-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0805 23:17:59.079490   35154 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.112:22: connect: no route to host
	I0805 23:17:59.079497   35154 status.go:255] checking status of ha-044175-m03 ...
	I0805 23:17:59.079865   35154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:59.079909   35154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:59.094672   35154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39501
	I0805 23:17:59.095182   35154 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:59.095626   35154 main.go:141] libmachine: Using API Version  1
	I0805 23:17:59.095651   35154 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:59.095989   35154 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:59.096146   35154 main.go:141] libmachine: (ha-044175-m03) Calling .GetState
	I0805 23:17:59.098145   35154 status.go:330] ha-044175-m03 host status = "Running" (err=<nil>)
	I0805 23:17:59.098164   35154 host.go:66] Checking if "ha-044175-m03" exists ...
	I0805 23:17:59.098553   35154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:59.098603   35154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:59.114027   35154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46237
	I0805 23:17:59.114442   35154 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:59.114879   35154 main.go:141] libmachine: Using API Version  1
	I0805 23:17:59.114897   35154 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:59.115289   35154 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:59.115474   35154 main.go:141] libmachine: (ha-044175-m03) Calling .GetIP
	I0805 23:17:59.118756   35154 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:17:59.119310   35154 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:17:59.119332   35154 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:17:59.119484   35154 host.go:66] Checking if "ha-044175-m03" exists ...
	I0805 23:17:59.119915   35154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:59.119964   35154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:59.134740   35154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35475
	I0805 23:17:59.135205   35154 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:59.135674   35154 main.go:141] libmachine: Using API Version  1
	I0805 23:17:59.135695   35154 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:59.135967   35154 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:59.136110   35154 main.go:141] libmachine: (ha-044175-m03) Calling .DriverName
	I0805 23:17:59.136311   35154 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 23:17:59.136338   35154 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHHostname
	I0805 23:17:59.139160   35154 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:17:59.139624   35154 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:17:59.139653   35154 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:17:59.139948   35154 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHPort
	I0805 23:17:59.140144   35154 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHKeyPath
	I0805 23:17:59.140420   35154 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHUsername
	I0805 23:17:59.140576   35154 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m03/id_rsa Username:docker}
	I0805 23:17:59.227937   35154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 23:17:59.246900   35154 kubeconfig.go:125] found "ha-044175" server: "https://192.168.39.254:8443"
	I0805 23:17:59.246926   35154 api_server.go:166] Checking apiserver status ...
	I0805 23:17:59.246958   35154 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 23:17:59.261738   35154 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1566/cgroup
	W0805 23:17:59.271926   35154 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1566/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 23:17:59.271989   35154 ssh_runner.go:195] Run: ls
	I0805 23:17:59.276794   35154 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0805 23:17:59.281206   35154 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0805 23:17:59.281229   35154 status.go:422] ha-044175-m03 apiserver status = Running (err=<nil>)
	I0805 23:17:59.281236   35154 status.go:257] ha-044175-m03 status: &{Name:ha-044175-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 23:17:59.281249   35154 status.go:255] checking status of ha-044175-m04 ...
	I0805 23:17:59.281537   35154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:59.281567   35154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:59.296045   35154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36289
	I0805 23:17:59.296427   35154 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:59.296946   35154 main.go:141] libmachine: Using API Version  1
	I0805 23:17:59.296972   35154 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:59.297310   35154 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:59.297512   35154 main.go:141] libmachine: (ha-044175-m04) Calling .GetState
	I0805 23:17:59.299486   35154 status.go:330] ha-044175-m04 host status = "Running" (err=<nil>)
	I0805 23:17:59.299502   35154 host.go:66] Checking if "ha-044175-m04" exists ...
	I0805 23:17:59.299785   35154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:59.299825   35154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:59.316281   35154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39923
	I0805 23:17:59.316642   35154 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:59.317107   35154 main.go:141] libmachine: Using API Version  1
	I0805 23:17:59.317135   35154 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:59.317429   35154 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:59.317625   35154 main.go:141] libmachine: (ha-044175-m04) Calling .GetIP
	I0805 23:17:59.320251   35154 main.go:141] libmachine: (ha-044175-m04) DBG | domain ha-044175-m04 has defined MAC address 52:54:00:e5:ba:4d in network mk-ha-044175
	I0805 23:17:59.320658   35154 main.go:141] libmachine: (ha-044175-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:ba:4d", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:13:59 +0000 UTC Type:0 Mac:52:54:00:e5:ba:4d Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-044175-m04 Clientid:01:52:54:00:e5:ba:4d}
	I0805 23:17:59.320687   35154 main.go:141] libmachine: (ha-044175-m04) DBG | domain ha-044175-m04 has defined IP address 192.168.39.228 and MAC address 52:54:00:e5:ba:4d in network mk-ha-044175
	I0805 23:17:59.320847   35154 host.go:66] Checking if "ha-044175-m04" exists ...
	I0805 23:17:59.321122   35154 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:17:59.321156   35154 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:17:59.336270   35154 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43285
	I0805 23:17:59.336723   35154 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:17:59.337225   35154 main.go:141] libmachine: Using API Version  1
	I0805 23:17:59.337247   35154 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:17:59.337515   35154 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:17:59.337751   35154 main.go:141] libmachine: (ha-044175-m04) Calling .DriverName
	I0805 23:17:59.337986   35154 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 23:17:59.338008   35154 main.go:141] libmachine: (ha-044175-m04) Calling .GetSSHHostname
	I0805 23:17:59.341343   35154 main.go:141] libmachine: (ha-044175-m04) DBG | domain ha-044175-m04 has defined MAC address 52:54:00:e5:ba:4d in network mk-ha-044175
	I0805 23:17:59.341784   35154 main.go:141] libmachine: (ha-044175-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:ba:4d", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:13:59 +0000 UTC Type:0 Mac:52:54:00:e5:ba:4d Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-044175-m04 Clientid:01:52:54:00:e5:ba:4d}
	I0805 23:17:59.341805   35154 main.go:141] libmachine: (ha-044175-m04) DBG | domain ha-044175-m04 has defined IP address 192.168.39.228 and MAC address 52:54:00:e5:ba:4d in network mk-ha-044175
	I0805 23:17:59.341990   35154 main.go:141] libmachine: (ha-044175-m04) Calling .GetSSHPort
	I0805 23:17:59.342217   35154 main.go:141] libmachine: (ha-044175-m04) Calling .GetSSHKeyPath
	I0805 23:17:59.342372   35154 main.go:141] libmachine: (ha-044175-m04) Calling .GetSSHUsername
	I0805 23:17:59.342519   35154 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m04/id_rsa Username:docker}
	I0805 23:17:59.422865   35154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 23:17:59.438517   35154 status.go:257] ha-044175-m04 status: &{Name:ha-044175-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-044175 status -v=7 --alsologtostderr: exit status 7 (615.676216ms)

                                                
                                                
-- stdout --
	ha-044175
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-044175-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-044175-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-044175-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 23:18:05.938442   35290 out.go:291] Setting OutFile to fd 1 ...
	I0805 23:18:05.938694   35290 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:18:05.938704   35290 out.go:304] Setting ErrFile to fd 2...
	I0805 23:18:05.938708   35290 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:18:05.938928   35290 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-9606/.minikube/bin
	I0805 23:18:05.939141   35290 out.go:298] Setting JSON to false
	I0805 23:18:05.939164   35290 mustload.go:65] Loading cluster: ha-044175
	I0805 23:18:05.939201   35290 notify.go:220] Checking for updates...
	I0805 23:18:05.939736   35290 config.go:182] Loaded profile config "ha-044175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:18:05.939756   35290 status.go:255] checking status of ha-044175 ...
	I0805 23:18:05.940147   35290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:18:05.940248   35290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:18:05.959511   35290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36653
	I0805 23:18:05.959930   35290 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:18:05.960468   35290 main.go:141] libmachine: Using API Version  1
	I0805 23:18:05.960491   35290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:18:05.960870   35290 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:18:05.961070   35290 main.go:141] libmachine: (ha-044175) Calling .GetState
	I0805 23:18:05.962609   35290 status.go:330] ha-044175 host status = "Running" (err=<nil>)
	I0805 23:18:05.962631   35290 host.go:66] Checking if "ha-044175" exists ...
	I0805 23:18:05.963038   35290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:18:05.963096   35290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:18:05.977947   35290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41929
	I0805 23:18:05.978319   35290 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:18:05.978759   35290 main.go:141] libmachine: Using API Version  1
	I0805 23:18:05.978780   35290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:18:05.979071   35290 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:18:05.979259   35290 main.go:141] libmachine: (ha-044175) Calling .GetIP
	I0805 23:18:05.982158   35290 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:18:05.982613   35290 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:18:05.982644   35290 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:18:05.982772   35290 host.go:66] Checking if "ha-044175" exists ...
	I0805 23:18:05.983202   35290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:18:05.983258   35290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:18:05.998932   35290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35813
	I0805 23:18:05.999477   35290 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:18:05.999995   35290 main.go:141] libmachine: Using API Version  1
	I0805 23:18:06.000015   35290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:18:06.000311   35290 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:18:06.000495   35290 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:18:06.000737   35290 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 23:18:06.000764   35290 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:18:06.003986   35290 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:18:06.004457   35290 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:18:06.004500   35290 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:18:06.004655   35290 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:18:06.004846   35290 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:18:06.005002   35290 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:18:06.005129   35290 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/id_rsa Username:docker}
	I0805 23:18:06.082753   35290 ssh_runner.go:195] Run: systemctl --version
	I0805 23:18:06.089151   35290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 23:18:06.106554   35290 kubeconfig.go:125] found "ha-044175" server: "https://192.168.39.254:8443"
	I0805 23:18:06.106581   35290 api_server.go:166] Checking apiserver status ...
	I0805 23:18:06.106612   35290 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 23:18:06.121365   35290 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup
	W0805 23:18:06.132013   35290 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1201/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 23:18:06.132061   35290 ssh_runner.go:195] Run: ls
	I0805 23:18:06.136858   35290 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0805 23:18:06.143940   35290 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0805 23:18:06.143971   35290 status.go:422] ha-044175 apiserver status = Running (err=<nil>)
	I0805 23:18:06.143984   35290 status.go:257] ha-044175 status: &{Name:ha-044175 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 23:18:06.144008   35290 status.go:255] checking status of ha-044175-m02 ...
	I0805 23:18:06.144440   35290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:18:06.144493   35290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:18:06.160911   35290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42109
	I0805 23:18:06.161342   35290 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:18:06.161867   35290 main.go:141] libmachine: Using API Version  1
	I0805 23:18:06.161890   35290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:18:06.162222   35290 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:18:06.162404   35290 main.go:141] libmachine: (ha-044175-m02) Calling .GetState
	I0805 23:18:06.164003   35290 status.go:330] ha-044175-m02 host status = "Stopped" (err=<nil>)
	I0805 23:18:06.164016   35290 status.go:343] host is not running, skipping remaining checks
	I0805 23:18:06.164023   35290 status.go:257] ha-044175-m02 status: &{Name:ha-044175-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 23:18:06.164040   35290 status.go:255] checking status of ha-044175-m03 ...
	I0805 23:18:06.164415   35290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:18:06.164456   35290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:18:06.178825   35290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35635
	I0805 23:18:06.179332   35290 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:18:06.179796   35290 main.go:141] libmachine: Using API Version  1
	I0805 23:18:06.179823   35290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:18:06.180135   35290 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:18:06.180316   35290 main.go:141] libmachine: (ha-044175-m03) Calling .GetState
	I0805 23:18:06.181989   35290 status.go:330] ha-044175-m03 host status = "Running" (err=<nil>)
	I0805 23:18:06.182003   35290 host.go:66] Checking if "ha-044175-m03" exists ...
	I0805 23:18:06.182287   35290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:18:06.182326   35290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:18:06.198628   35290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34129
	I0805 23:18:06.199042   35290 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:18:06.199479   35290 main.go:141] libmachine: Using API Version  1
	I0805 23:18:06.199508   35290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:18:06.199818   35290 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:18:06.199992   35290 main.go:141] libmachine: (ha-044175-m03) Calling .GetIP
	I0805 23:18:06.202694   35290 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:18:06.203035   35290 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:18:06.203080   35290 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:18:06.203232   35290 host.go:66] Checking if "ha-044175-m03" exists ...
	I0805 23:18:06.203544   35290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:18:06.203587   35290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:18:06.217797   35290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42623
	I0805 23:18:06.218189   35290 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:18:06.218620   35290 main.go:141] libmachine: Using API Version  1
	I0805 23:18:06.218639   35290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:18:06.218903   35290 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:18:06.219063   35290 main.go:141] libmachine: (ha-044175-m03) Calling .DriverName
	I0805 23:18:06.219254   35290 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 23:18:06.219276   35290 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHHostname
	I0805 23:18:06.221907   35290 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:18:06.222286   35290 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:18:06.222311   35290 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:18:06.222418   35290 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHPort
	I0805 23:18:06.222576   35290 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHKeyPath
	I0805 23:18:06.222725   35290 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHUsername
	I0805 23:18:06.222833   35290 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m03/id_rsa Username:docker}
	I0805 23:18:06.307111   35290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 23:18:06.321720   35290 kubeconfig.go:125] found "ha-044175" server: "https://192.168.39.254:8443"
	I0805 23:18:06.321759   35290 api_server.go:166] Checking apiserver status ...
	I0805 23:18:06.321805   35290 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 23:18:06.335277   35290 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1566/cgroup
	W0805 23:18:06.345516   35290 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1566/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 23:18:06.345575   35290 ssh_runner.go:195] Run: ls
	I0805 23:18:06.350350   35290 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0805 23:18:06.354848   35290 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0805 23:18:06.354874   35290 status.go:422] ha-044175-m03 apiserver status = Running (err=<nil>)
	I0805 23:18:06.354885   35290 status.go:257] ha-044175-m03 status: &{Name:ha-044175-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 23:18:06.354903   35290 status.go:255] checking status of ha-044175-m04 ...
	I0805 23:18:06.355319   35290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:18:06.355364   35290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:18:06.370336   35290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37041
	I0805 23:18:06.370777   35290 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:18:06.371205   35290 main.go:141] libmachine: Using API Version  1
	I0805 23:18:06.371229   35290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:18:06.371516   35290 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:18:06.371683   35290 main.go:141] libmachine: (ha-044175-m04) Calling .GetState
	I0805 23:18:06.373180   35290 status.go:330] ha-044175-m04 host status = "Running" (err=<nil>)
	I0805 23:18:06.373207   35290 host.go:66] Checking if "ha-044175-m04" exists ...
	I0805 23:18:06.373507   35290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:18:06.373538   35290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:18:06.388201   35290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41203
	I0805 23:18:06.388640   35290 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:18:06.389166   35290 main.go:141] libmachine: Using API Version  1
	I0805 23:18:06.389187   35290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:18:06.389494   35290 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:18:06.389670   35290 main.go:141] libmachine: (ha-044175-m04) Calling .GetIP
	I0805 23:18:06.392329   35290 main.go:141] libmachine: (ha-044175-m04) DBG | domain ha-044175-m04 has defined MAC address 52:54:00:e5:ba:4d in network mk-ha-044175
	I0805 23:18:06.392809   35290 main.go:141] libmachine: (ha-044175-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:ba:4d", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:13:59 +0000 UTC Type:0 Mac:52:54:00:e5:ba:4d Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-044175-m04 Clientid:01:52:54:00:e5:ba:4d}
	I0805 23:18:06.392827   35290 main.go:141] libmachine: (ha-044175-m04) DBG | domain ha-044175-m04 has defined IP address 192.168.39.228 and MAC address 52:54:00:e5:ba:4d in network mk-ha-044175
	I0805 23:18:06.392997   35290 host.go:66] Checking if "ha-044175-m04" exists ...
	I0805 23:18:06.393341   35290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:18:06.393375   35290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:18:06.408813   35290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42263
	I0805 23:18:06.409204   35290 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:18:06.409579   35290 main.go:141] libmachine: Using API Version  1
	I0805 23:18:06.409599   35290 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:18:06.409874   35290 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:18:06.410019   35290 main.go:141] libmachine: (ha-044175-m04) Calling .DriverName
	I0805 23:18:06.410205   35290 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 23:18:06.410242   35290 main.go:141] libmachine: (ha-044175-m04) Calling .GetSSHHostname
	I0805 23:18:06.412970   35290 main.go:141] libmachine: (ha-044175-m04) DBG | domain ha-044175-m04 has defined MAC address 52:54:00:e5:ba:4d in network mk-ha-044175
	I0805 23:18:06.413388   35290 main.go:141] libmachine: (ha-044175-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:ba:4d", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:13:59 +0000 UTC Type:0 Mac:52:54:00:e5:ba:4d Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-044175-m04 Clientid:01:52:54:00:e5:ba:4d}
	I0805 23:18:06.413411   35290 main.go:141] libmachine: (ha-044175-m04) DBG | domain ha-044175-m04 has defined IP address 192.168.39.228 and MAC address 52:54:00:e5:ba:4d in network mk-ha-044175
	I0805 23:18:06.413573   35290 main.go:141] libmachine: (ha-044175-m04) Calling .GetSSHPort
	I0805 23:18:06.413725   35290 main.go:141] libmachine: (ha-044175-m04) Calling .GetSSHKeyPath
	I0805 23:18:06.413894   35290 main.go:141] libmachine: (ha-044175-m04) Calling .GetSSHUsername
	I0805 23:18:06.414038   35290 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m04/id_rsa Username:docker}
	I0805 23:18:06.495297   35290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 23:18:06.512143   35290 status.go:257] ha-044175-m04 status: &{Name:ha-044175-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-044175 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-044175 -n ha-044175
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-044175 logs -n 25: (1.428018986s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-044175 ssh -n                                                                 | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-044175 cp ha-044175-m03:/home/docker/cp-test.txt                              | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175:/home/docker/cp-test_ha-044175-m03_ha-044175.txt                       |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n                                                                 | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n ha-044175 sudo cat                                              | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | /home/docker/cp-test_ha-044175-m03_ha-044175.txt                                 |           |         |         |                     |                     |
	| cp      | ha-044175 cp ha-044175-m03:/home/docker/cp-test.txt                              | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m02:/home/docker/cp-test_ha-044175-m03_ha-044175-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n                                                                 | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n ha-044175-m02 sudo cat                                          | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | /home/docker/cp-test_ha-044175-m03_ha-044175-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-044175 cp ha-044175-m03:/home/docker/cp-test.txt                              | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m04:/home/docker/cp-test_ha-044175-m03_ha-044175-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n                                                                 | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n ha-044175-m04 sudo cat                                          | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | /home/docker/cp-test_ha-044175-m03_ha-044175-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-044175 cp testdata/cp-test.txt                                                | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n                                                                 | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-044175 cp ha-044175-m04:/home/docker/cp-test.txt                              | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3481107746/001/cp-test_ha-044175-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n                                                                 | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-044175 cp ha-044175-m04:/home/docker/cp-test.txt                              | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175:/home/docker/cp-test_ha-044175-m04_ha-044175.txt                       |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n                                                                 | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n ha-044175 sudo cat                                              | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | /home/docker/cp-test_ha-044175-m04_ha-044175.txt                                 |           |         |         |                     |                     |
	| cp      | ha-044175 cp ha-044175-m04:/home/docker/cp-test.txt                              | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m02:/home/docker/cp-test_ha-044175-m04_ha-044175-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n                                                                 | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n ha-044175-m02 sudo cat                                          | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | /home/docker/cp-test_ha-044175-m04_ha-044175-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-044175 cp ha-044175-m04:/home/docker/cp-test.txt                              | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m03:/home/docker/cp-test_ha-044175-m04_ha-044175-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n                                                                 | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n ha-044175-m03 sudo cat                                          | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | /home/docker/cp-test_ha-044175-m04_ha-044175-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-044175 node stop m02 -v=7                                                     | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-044175 node start m02 -v=7                                                    | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:17 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 23:10:00
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 23:10:00.718936   28839 out.go:291] Setting OutFile to fd 1 ...
	I0805 23:10:00.719071   28839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:10:00.719082   28839 out.go:304] Setting ErrFile to fd 2...
	I0805 23:10:00.719089   28839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:10:00.719264   28839 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-9606/.minikube/bin
	I0805 23:10:00.719821   28839 out.go:298] Setting JSON to false
	I0805 23:10:00.720707   28839 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3147,"bootTime":1722896254,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0805 23:10:00.720765   28839 start.go:139] virtualization: kvm guest
	I0805 23:10:00.723090   28839 out.go:177] * [ha-044175] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0805 23:10:00.724859   28839 notify.go:220] Checking for updates...
	I0805 23:10:00.724881   28839 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 23:10:00.726355   28839 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 23:10:00.727722   28839 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19373-9606/kubeconfig
	I0805 23:10:00.729247   28839 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19373-9606/.minikube
	I0805 23:10:00.730647   28839 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0805 23:10:00.731953   28839 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 23:10:00.733364   28839 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 23:10:00.768508   28839 out.go:177] * Using the kvm2 driver based on user configuration
	I0805 23:10:00.769796   28839 start.go:297] selected driver: kvm2
	I0805 23:10:00.769817   28839 start.go:901] validating driver "kvm2" against <nil>
	I0805 23:10:00.769828   28839 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 23:10:00.770541   28839 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 23:10:00.770614   28839 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19373-9606/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0805 23:10:00.786160   28839 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0805 23:10:00.786223   28839 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 23:10:00.786474   28839 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 23:10:00.786523   28839 cni.go:84] Creating CNI manager for ""
	I0805 23:10:00.786533   28839 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0805 23:10:00.786537   28839 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0805 23:10:00.786605   28839 start.go:340] cluster config:
	{Name:ha-044175 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-044175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 23:10:00.786703   28839 iso.go:125] acquiring lock: {Name:mk54a637ed625e04bb2b6adf973b61c976cd6d35 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 23:10:00.788752   28839 out.go:177] * Starting "ha-044175" primary control-plane node in "ha-044175" cluster
	I0805 23:10:00.790061   28839 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 23:10:00.790106   28839 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0805 23:10:00.790113   28839 cache.go:56] Caching tarball of preloaded images
	I0805 23:10:00.790183   28839 preload.go:172] Found /home/jenkins/minikube-integration/19373-9606/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0805 23:10:00.790193   28839 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0805 23:10:00.790469   28839 profile.go:143] Saving config to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/config.json ...
	I0805 23:10:00.790488   28839 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/config.json: {Name:mk8c38569b7ea25c26897d16a4c42d0fe2104a00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 23:10:00.790610   28839 start.go:360] acquireMachinesLock for ha-044175: {Name:mkd2ba511c39504598222edbf83078b718329186 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 23:10:00.790645   28839 start.go:364] duration metric: took 22.585µs to acquireMachinesLock for "ha-044175"
	I0805 23:10:00.790660   28839 start.go:93] Provisioning new machine with config: &{Name:ha-044175 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-044175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 23:10:00.790713   28839 start.go:125] createHost starting for "" (driver="kvm2")
	I0805 23:10:00.793461   28839 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 23:10:00.793604   28839 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:10:00.793643   28839 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:10:00.807872   28839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44653
	I0805 23:10:00.808276   28839 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:10:00.808860   28839 main.go:141] libmachine: Using API Version  1
	I0805 23:10:00.808885   28839 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:10:00.809199   28839 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:10:00.809389   28839 main.go:141] libmachine: (ha-044175) Calling .GetMachineName
	I0805 23:10:00.809553   28839 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:10:00.809686   28839 start.go:159] libmachine.API.Create for "ha-044175" (driver="kvm2")
	I0805 23:10:00.809713   28839 client.go:168] LocalClient.Create starting
	I0805 23:10:00.809747   28839 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem
	I0805 23:10:00.809793   28839 main.go:141] libmachine: Decoding PEM data...
	I0805 23:10:00.809818   28839 main.go:141] libmachine: Parsing certificate...
	I0805 23:10:00.809891   28839 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem
	I0805 23:10:00.809919   28839 main.go:141] libmachine: Decoding PEM data...
	I0805 23:10:00.809938   28839 main.go:141] libmachine: Parsing certificate...
	I0805 23:10:00.809964   28839 main.go:141] libmachine: Running pre-create checks...
	I0805 23:10:00.809977   28839 main.go:141] libmachine: (ha-044175) Calling .PreCreateCheck
	I0805 23:10:00.810289   28839 main.go:141] libmachine: (ha-044175) Calling .GetConfigRaw
	I0805 23:10:00.810647   28839 main.go:141] libmachine: Creating machine...
	I0805 23:10:00.810661   28839 main.go:141] libmachine: (ha-044175) Calling .Create
	I0805 23:10:00.810782   28839 main.go:141] libmachine: (ha-044175) Creating KVM machine...
	I0805 23:10:00.811931   28839 main.go:141] libmachine: (ha-044175) DBG | found existing default KVM network
	I0805 23:10:00.812569   28839 main.go:141] libmachine: (ha-044175) DBG | I0805 23:10:00.812433   28863 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0805 23:10:00.812588   28839 main.go:141] libmachine: (ha-044175) DBG | created network xml: 
	I0805 23:10:00.812600   28839 main.go:141] libmachine: (ha-044175) DBG | <network>
	I0805 23:10:00.812607   28839 main.go:141] libmachine: (ha-044175) DBG |   <name>mk-ha-044175</name>
	I0805 23:10:00.812617   28839 main.go:141] libmachine: (ha-044175) DBG |   <dns enable='no'/>
	I0805 23:10:00.812632   28839 main.go:141] libmachine: (ha-044175) DBG |   
	I0805 23:10:00.812671   28839 main.go:141] libmachine: (ha-044175) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0805 23:10:00.812692   28839 main.go:141] libmachine: (ha-044175) DBG |     <dhcp>
	I0805 23:10:00.812704   28839 main.go:141] libmachine: (ha-044175) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0805 23:10:00.812725   28839 main.go:141] libmachine: (ha-044175) DBG |     </dhcp>
	I0805 23:10:00.812748   28839 main.go:141] libmachine: (ha-044175) DBG |   </ip>
	I0805 23:10:00.812764   28839 main.go:141] libmachine: (ha-044175) DBG |   
	I0805 23:10:00.812773   28839 main.go:141] libmachine: (ha-044175) DBG | </network>
	I0805 23:10:00.812777   28839 main.go:141] libmachine: (ha-044175) DBG | 
	I0805 23:10:00.817725   28839 main.go:141] libmachine: (ha-044175) DBG | trying to create private KVM network mk-ha-044175 192.168.39.0/24...
	I0805 23:10:00.882500   28839 main.go:141] libmachine: (ha-044175) DBG | private KVM network mk-ha-044175 192.168.39.0/24 created
	I0805 23:10:00.882533   28839 main.go:141] libmachine: (ha-044175) DBG | I0805 23:10:00.882456   28863 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19373-9606/.minikube
	I0805 23:10:00.882547   28839 main.go:141] libmachine: (ha-044175) Setting up store path in /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175 ...
	I0805 23:10:00.882567   28839 main.go:141] libmachine: (ha-044175) Building disk image from file:///home/jenkins/minikube-integration/19373-9606/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0805 23:10:00.882636   28839 main.go:141] libmachine: (ha-044175) Downloading /home/jenkins/minikube-integration/19373-9606/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19373-9606/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0805 23:10:01.119900   28839 main.go:141] libmachine: (ha-044175) DBG | I0805 23:10:01.119732   28863 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/id_rsa...
	I0805 23:10:01.238103   28839 main.go:141] libmachine: (ha-044175) DBG | I0805 23:10:01.237978   28863 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/ha-044175.rawdisk...
	I0805 23:10:01.238131   28839 main.go:141] libmachine: (ha-044175) DBG | Writing magic tar header
	I0805 23:10:01.238142   28839 main.go:141] libmachine: (ha-044175) DBG | Writing SSH key tar header
	I0805 23:10:01.238149   28839 main.go:141] libmachine: (ha-044175) DBG | I0805 23:10:01.238092   28863 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175 ...
	I0805 23:10:01.238295   28839 main.go:141] libmachine: (ha-044175) Setting executable bit set on /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175 (perms=drwx------)
	I0805 23:10:01.238326   28839 main.go:141] libmachine: (ha-044175) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175
	I0805 23:10:01.238337   28839 main.go:141] libmachine: (ha-044175) Setting executable bit set on /home/jenkins/minikube-integration/19373-9606/.minikube/machines (perms=drwxr-xr-x)
	I0805 23:10:01.238364   28839 main.go:141] libmachine: (ha-044175) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19373-9606/.minikube/machines
	I0805 23:10:01.238383   28839 main.go:141] libmachine: (ha-044175) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19373-9606/.minikube
	I0805 23:10:01.238397   28839 main.go:141] libmachine: (ha-044175) Setting executable bit set on /home/jenkins/minikube-integration/19373-9606/.minikube (perms=drwxr-xr-x)
	I0805 23:10:01.238410   28839 main.go:141] libmachine: (ha-044175) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19373-9606
	I0805 23:10:01.238434   28839 main.go:141] libmachine: (ha-044175) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0805 23:10:01.238450   28839 main.go:141] libmachine: (ha-044175) Setting executable bit set on /home/jenkins/minikube-integration/19373-9606 (perms=drwxrwxr-x)
	I0805 23:10:01.238458   28839 main.go:141] libmachine: (ha-044175) DBG | Checking permissions on dir: /home/jenkins
	I0805 23:10:01.238477   28839 main.go:141] libmachine: (ha-044175) DBG | Checking permissions on dir: /home
	I0805 23:10:01.238489   28839 main.go:141] libmachine: (ha-044175) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0805 23:10:01.238496   28839 main.go:141] libmachine: (ha-044175) DBG | Skipping /home - not owner
	I0805 23:10:01.238508   28839 main.go:141] libmachine: (ha-044175) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0805 23:10:01.238518   28839 main.go:141] libmachine: (ha-044175) Creating domain...
	I0805 23:10:01.239457   28839 main.go:141] libmachine: (ha-044175) define libvirt domain using xml: 
	I0805 23:10:01.239477   28839 main.go:141] libmachine: (ha-044175) <domain type='kvm'>
	I0805 23:10:01.239487   28839 main.go:141] libmachine: (ha-044175)   <name>ha-044175</name>
	I0805 23:10:01.239496   28839 main.go:141] libmachine: (ha-044175)   <memory unit='MiB'>2200</memory>
	I0805 23:10:01.239503   28839 main.go:141] libmachine: (ha-044175)   <vcpu>2</vcpu>
	I0805 23:10:01.239508   28839 main.go:141] libmachine: (ha-044175)   <features>
	I0805 23:10:01.239517   28839 main.go:141] libmachine: (ha-044175)     <acpi/>
	I0805 23:10:01.239521   28839 main.go:141] libmachine: (ha-044175)     <apic/>
	I0805 23:10:01.239528   28839 main.go:141] libmachine: (ha-044175)     <pae/>
	I0805 23:10:01.239543   28839 main.go:141] libmachine: (ha-044175)     
	I0805 23:10:01.239566   28839 main.go:141] libmachine: (ha-044175)   </features>
	I0805 23:10:01.239586   28839 main.go:141] libmachine: (ha-044175)   <cpu mode='host-passthrough'>
	I0805 23:10:01.239598   28839 main.go:141] libmachine: (ha-044175)   
	I0805 23:10:01.239605   28839 main.go:141] libmachine: (ha-044175)   </cpu>
	I0805 23:10:01.239615   28839 main.go:141] libmachine: (ha-044175)   <os>
	I0805 23:10:01.239622   28839 main.go:141] libmachine: (ha-044175)     <type>hvm</type>
	I0805 23:10:01.239633   28839 main.go:141] libmachine: (ha-044175)     <boot dev='cdrom'/>
	I0805 23:10:01.239643   28839 main.go:141] libmachine: (ha-044175)     <boot dev='hd'/>
	I0805 23:10:01.239665   28839 main.go:141] libmachine: (ha-044175)     <bootmenu enable='no'/>
	I0805 23:10:01.239677   28839 main.go:141] libmachine: (ha-044175)   </os>
	I0805 23:10:01.239683   28839 main.go:141] libmachine: (ha-044175)   <devices>
	I0805 23:10:01.239690   28839 main.go:141] libmachine: (ha-044175)     <disk type='file' device='cdrom'>
	I0805 23:10:01.239700   28839 main.go:141] libmachine: (ha-044175)       <source file='/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/boot2docker.iso'/>
	I0805 23:10:01.239705   28839 main.go:141] libmachine: (ha-044175)       <target dev='hdc' bus='scsi'/>
	I0805 23:10:01.239712   28839 main.go:141] libmachine: (ha-044175)       <readonly/>
	I0805 23:10:01.239717   28839 main.go:141] libmachine: (ha-044175)     </disk>
	I0805 23:10:01.239725   28839 main.go:141] libmachine: (ha-044175)     <disk type='file' device='disk'>
	I0805 23:10:01.239731   28839 main.go:141] libmachine: (ha-044175)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0805 23:10:01.239741   28839 main.go:141] libmachine: (ha-044175)       <source file='/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/ha-044175.rawdisk'/>
	I0805 23:10:01.239748   28839 main.go:141] libmachine: (ha-044175)       <target dev='hda' bus='virtio'/>
	I0805 23:10:01.239753   28839 main.go:141] libmachine: (ha-044175)     </disk>
	I0805 23:10:01.239760   28839 main.go:141] libmachine: (ha-044175)     <interface type='network'>
	I0805 23:10:01.239765   28839 main.go:141] libmachine: (ha-044175)       <source network='mk-ha-044175'/>
	I0805 23:10:01.239772   28839 main.go:141] libmachine: (ha-044175)       <model type='virtio'/>
	I0805 23:10:01.239790   28839 main.go:141] libmachine: (ha-044175)     </interface>
	I0805 23:10:01.239808   28839 main.go:141] libmachine: (ha-044175)     <interface type='network'>
	I0805 23:10:01.239815   28839 main.go:141] libmachine: (ha-044175)       <source network='default'/>
	I0805 23:10:01.239824   28839 main.go:141] libmachine: (ha-044175)       <model type='virtio'/>
	I0805 23:10:01.239832   28839 main.go:141] libmachine: (ha-044175)     </interface>
	I0805 23:10:01.239837   28839 main.go:141] libmachine: (ha-044175)     <serial type='pty'>
	I0805 23:10:01.239844   28839 main.go:141] libmachine: (ha-044175)       <target port='0'/>
	I0805 23:10:01.239848   28839 main.go:141] libmachine: (ha-044175)     </serial>
	I0805 23:10:01.239853   28839 main.go:141] libmachine: (ha-044175)     <console type='pty'>
	I0805 23:10:01.239858   28839 main.go:141] libmachine: (ha-044175)       <target type='serial' port='0'/>
	I0805 23:10:01.239871   28839 main.go:141] libmachine: (ha-044175)     </console>
	I0805 23:10:01.239878   28839 main.go:141] libmachine: (ha-044175)     <rng model='virtio'>
	I0805 23:10:01.239884   28839 main.go:141] libmachine: (ha-044175)       <backend model='random'>/dev/random</backend>
	I0805 23:10:01.239890   28839 main.go:141] libmachine: (ha-044175)     </rng>
	I0805 23:10:01.239895   28839 main.go:141] libmachine: (ha-044175)     
	I0805 23:10:01.239901   28839 main.go:141] libmachine: (ha-044175)     
	I0805 23:10:01.239907   28839 main.go:141] libmachine: (ha-044175)   </devices>
	I0805 23:10:01.239919   28839 main.go:141] libmachine: (ha-044175) </domain>
	I0805 23:10:01.239929   28839 main.go:141] libmachine: (ha-044175) 
	I0805 23:10:01.244433   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:f9:9f:76 in network default
	I0805 23:10:01.245052   28839 main.go:141] libmachine: (ha-044175) Ensuring networks are active...
	I0805 23:10:01.245083   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:01.245895   28839 main.go:141] libmachine: (ha-044175) Ensuring network default is active
	I0805 23:10:01.246314   28839 main.go:141] libmachine: (ha-044175) Ensuring network mk-ha-044175 is active
	I0805 23:10:01.246952   28839 main.go:141] libmachine: (ha-044175) Getting domain xml...
	I0805 23:10:01.247686   28839 main.go:141] libmachine: (ha-044175) Creating domain...
	I0805 23:10:02.446914   28839 main.go:141] libmachine: (ha-044175) Waiting to get IP...
	I0805 23:10:02.447670   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:02.448142   28839 main.go:141] libmachine: (ha-044175) DBG | unable to find current IP address of domain ha-044175 in network mk-ha-044175
	I0805 23:10:02.448213   28839 main.go:141] libmachine: (ha-044175) DBG | I0805 23:10:02.448124   28863 retry.go:31] will retry after 191.25034ms: waiting for machine to come up
	I0805 23:10:02.640708   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:02.641197   28839 main.go:141] libmachine: (ha-044175) DBG | unable to find current IP address of domain ha-044175 in network mk-ha-044175
	I0805 23:10:02.641237   28839 main.go:141] libmachine: (ha-044175) DBG | I0805 23:10:02.641141   28863 retry.go:31] will retry after 358.499245ms: waiting for machine to come up
	I0805 23:10:03.004458   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:03.004821   28839 main.go:141] libmachine: (ha-044175) DBG | unable to find current IP address of domain ha-044175 in network mk-ha-044175
	I0805 23:10:03.004846   28839 main.go:141] libmachine: (ha-044175) DBG | I0805 23:10:03.004783   28863 retry.go:31] will retry after 364.580201ms: waiting for machine to come up
	I0805 23:10:03.371523   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:03.371897   28839 main.go:141] libmachine: (ha-044175) DBG | unable to find current IP address of domain ha-044175 in network mk-ha-044175
	I0805 23:10:03.371917   28839 main.go:141] libmachine: (ha-044175) DBG | I0805 23:10:03.371855   28863 retry.go:31] will retry after 419.904223ms: waiting for machine to come up
	I0805 23:10:03.793500   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:03.793884   28839 main.go:141] libmachine: (ha-044175) DBG | unable to find current IP address of domain ha-044175 in network mk-ha-044175
	I0805 23:10:03.793911   28839 main.go:141] libmachine: (ha-044175) DBG | I0805 23:10:03.793826   28863 retry.go:31] will retry after 491.37058ms: waiting for machine to come up
	I0805 23:10:04.286536   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:04.286776   28839 main.go:141] libmachine: (ha-044175) DBG | unable to find current IP address of domain ha-044175 in network mk-ha-044175
	I0805 23:10:04.286797   28839 main.go:141] libmachine: (ha-044175) DBG | I0805 23:10:04.286748   28863 retry.go:31] will retry after 888.681799ms: waiting for machine to come up
	I0805 23:10:05.176785   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:05.177203   28839 main.go:141] libmachine: (ha-044175) DBG | unable to find current IP address of domain ha-044175 in network mk-ha-044175
	I0805 23:10:05.177246   28839 main.go:141] libmachine: (ha-044175) DBG | I0805 23:10:05.177143   28863 retry.go:31] will retry after 1.004077925s: waiting for machine to come up
	I0805 23:10:06.183184   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:06.183601   28839 main.go:141] libmachine: (ha-044175) DBG | unable to find current IP address of domain ha-044175 in network mk-ha-044175
	I0805 23:10:06.183634   28839 main.go:141] libmachine: (ha-044175) DBG | I0805 23:10:06.183560   28863 retry.go:31] will retry after 904.086074ms: waiting for machine to come up
	I0805 23:10:07.089719   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:07.090237   28839 main.go:141] libmachine: (ha-044175) DBG | unable to find current IP address of domain ha-044175 in network mk-ha-044175
	I0805 23:10:07.090302   28839 main.go:141] libmachine: (ha-044175) DBG | I0805 23:10:07.090183   28863 retry.go:31] will retry after 1.512955902s: waiting for machine to come up
	I0805 23:10:08.605148   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:08.605542   28839 main.go:141] libmachine: (ha-044175) DBG | unable to find current IP address of domain ha-044175 in network mk-ha-044175
	I0805 23:10:08.605567   28839 main.go:141] libmachine: (ha-044175) DBG | I0805 23:10:08.605496   28863 retry.go:31] will retry after 2.282337689s: waiting for machine to come up
	I0805 23:10:10.890002   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:10.890445   28839 main.go:141] libmachine: (ha-044175) DBG | unable to find current IP address of domain ha-044175 in network mk-ha-044175
	I0805 23:10:10.890465   28839 main.go:141] libmachine: (ha-044175) DBG | I0805 23:10:10.890401   28863 retry.go:31] will retry after 2.554606146s: waiting for machine to come up
	I0805 23:10:13.448689   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:13.449556   28839 main.go:141] libmachine: (ha-044175) DBG | unable to find current IP address of domain ha-044175 in network mk-ha-044175
	I0805 23:10:13.449596   28839 main.go:141] libmachine: (ha-044175) DBG | I0805 23:10:13.449510   28863 retry.go:31] will retry after 2.866219855s: waiting for machine to come up
	I0805 23:10:16.316858   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:16.317305   28839 main.go:141] libmachine: (ha-044175) DBG | unable to find current IP address of domain ha-044175 in network mk-ha-044175
	I0805 23:10:16.317323   28839 main.go:141] libmachine: (ha-044175) DBG | I0805 23:10:16.317274   28863 retry.go:31] will retry after 3.484103482s: waiting for machine to come up
	I0805 23:10:19.805811   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:19.806296   28839 main.go:141] libmachine: (ha-044175) DBG | unable to find current IP address of domain ha-044175 in network mk-ha-044175
	I0805 23:10:19.806325   28839 main.go:141] libmachine: (ha-044175) DBG | I0805 23:10:19.806243   28863 retry.go:31] will retry after 5.133269507s: waiting for machine to come up
	I0805 23:10:24.944435   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:24.944843   28839 main.go:141] libmachine: (ha-044175) Found IP for machine: 192.168.39.57
	I0805 23:10:24.944880   28839 main.go:141] libmachine: (ha-044175) Reserving static IP address...
	I0805 23:10:24.944896   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has current primary IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:24.945267   28839 main.go:141] libmachine: (ha-044175) DBG | unable to find host DHCP lease matching {name: "ha-044175", mac: "52:54:00:d0:5f:e4", ip: "192.168.39.57"} in network mk-ha-044175
	I0805 23:10:25.016183   28839 main.go:141] libmachine: (ha-044175) DBG | Getting to WaitForSSH function...
	I0805 23:10:25.016214   28839 main.go:141] libmachine: (ha-044175) Reserved static IP address: 192.168.39.57
	I0805 23:10:25.016226   28839 main.go:141] libmachine: (ha-044175) Waiting for SSH to be available...
	I0805 23:10:25.019000   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:25.019572   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:10:25.019599   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:25.019766   28839 main.go:141] libmachine: (ha-044175) DBG | Using SSH client type: external
	I0805 23:10:25.019793   28839 main.go:141] libmachine: (ha-044175) DBG | Using SSH private key: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/id_rsa (-rw-------)
	I0805 23:10:25.019832   28839 main.go:141] libmachine: (ha-044175) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.57 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 23:10:25.019845   28839 main.go:141] libmachine: (ha-044175) DBG | About to run SSH command:
	I0805 23:10:25.019859   28839 main.go:141] libmachine: (ha-044175) DBG | exit 0
	I0805 23:10:25.143315   28839 main.go:141] libmachine: (ha-044175) DBG | SSH cmd err, output: <nil>: 
	I0805 23:10:25.143539   28839 main.go:141] libmachine: (ha-044175) KVM machine creation complete!
	I0805 23:10:25.143959   28839 main.go:141] libmachine: (ha-044175) Calling .GetConfigRaw
	I0805 23:10:25.144482   28839 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:10:25.144705   28839 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:10:25.144885   28839 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0805 23:10:25.144901   28839 main.go:141] libmachine: (ha-044175) Calling .GetState
	I0805 23:10:25.146441   28839 main.go:141] libmachine: Detecting operating system of created instance...
	I0805 23:10:25.146455   28839 main.go:141] libmachine: Waiting for SSH to be available...
	I0805 23:10:25.146461   28839 main.go:141] libmachine: Getting to WaitForSSH function...
	I0805 23:10:25.146467   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:10:25.148554   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:25.148915   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:10:25.148929   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:25.149036   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:10:25.149207   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:10:25.149378   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:10:25.149585   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:10:25.149764   28839 main.go:141] libmachine: Using SSH client type: native
	I0805 23:10:25.149951   28839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0805 23:10:25.149960   28839 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0805 23:10:25.250816   28839 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 23:10:25.250836   28839 main.go:141] libmachine: Detecting the provisioner...
	I0805 23:10:25.250843   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:10:25.253727   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:25.254273   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:10:25.254299   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:25.254494   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:10:25.254653   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:10:25.254784   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:10:25.254940   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:10:25.255135   28839 main.go:141] libmachine: Using SSH client type: native
	I0805 23:10:25.255318   28839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0805 23:10:25.255329   28839 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0805 23:10:25.356273   28839 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0805 23:10:25.356330   28839 main.go:141] libmachine: found compatible host: buildroot
	I0805 23:10:25.356337   28839 main.go:141] libmachine: Provisioning with buildroot...
	I0805 23:10:25.356346   28839 main.go:141] libmachine: (ha-044175) Calling .GetMachineName
	I0805 23:10:25.356584   28839 buildroot.go:166] provisioning hostname "ha-044175"
	I0805 23:10:25.356609   28839 main.go:141] libmachine: (ha-044175) Calling .GetMachineName
	I0805 23:10:25.356805   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:10:25.359179   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:25.359576   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:10:25.359608   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:25.359785   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:10:25.359980   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:10:25.360142   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:10:25.360309   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:10:25.360518   28839 main.go:141] libmachine: Using SSH client type: native
	I0805 23:10:25.360717   28839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0805 23:10:25.360730   28839 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-044175 && echo "ha-044175" | sudo tee /etc/hostname
	I0805 23:10:25.472972   28839 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-044175
	
	I0805 23:10:25.473002   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:10:25.476342   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:25.476698   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:10:25.476727   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:25.476864   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:10:25.477054   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:10:25.477222   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:10:25.477369   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:10:25.477485   28839 main.go:141] libmachine: Using SSH client type: native
	I0805 23:10:25.477637   28839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0805 23:10:25.477651   28839 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-044175' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-044175/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-044175' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 23:10:25.584203   28839 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 23:10:25.584230   28839 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19373-9606/.minikube CaCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19373-9606/.minikube}
	I0805 23:10:25.584275   28839 buildroot.go:174] setting up certificates
	I0805 23:10:25.584292   28839 provision.go:84] configureAuth start
	I0805 23:10:25.584303   28839 main.go:141] libmachine: (ha-044175) Calling .GetMachineName
	I0805 23:10:25.584581   28839 main.go:141] libmachine: (ha-044175) Calling .GetIP
	I0805 23:10:25.587629   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:25.587949   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:10:25.587975   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:25.588124   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:10:25.590515   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:25.590885   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:10:25.590916   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:25.591034   28839 provision.go:143] copyHostCerts
	I0805 23:10:25.591089   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem
	I0805 23:10:25.591138   28839 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem, removing ...
	I0805 23:10:25.591146   28839 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem
	I0805 23:10:25.591209   28839 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem (1123 bytes)
	I0805 23:10:25.591315   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem
	I0805 23:10:25.591347   28839 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem, removing ...
	I0805 23:10:25.591355   28839 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem
	I0805 23:10:25.591390   28839 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem (1679 bytes)
	I0805 23:10:25.591461   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem
	I0805 23:10:25.591487   28839 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem, removing ...
	I0805 23:10:25.591496   28839 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem
	I0805 23:10:25.591527   28839 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem (1082 bytes)
	I0805 23:10:25.591601   28839 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem org=jenkins.ha-044175 san=[127.0.0.1 192.168.39.57 ha-044175 localhost minikube]
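The server certificate generated here is what copyRemoteCerts pushes to /etc/docker/server.pem just below; its SAN list is spelled out in the log line above (127.0.0.1, 192.168.39.57, ha-044175, localhost, minikube). A hedged way to confirm those SANs from the Jenkins host, using the path shown in the log:
	openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'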
	I0805 23:10:25.760201   28839 provision.go:177] copyRemoteCerts
	I0805 23:10:25.760257   28839 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 23:10:25.760278   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:10:25.763102   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:25.763598   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:10:25.763631   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:25.763880   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:10:25.764062   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:10:25.764219   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:10:25.764418   28839 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/id_rsa Username:docker}
	I0805 23:10:25.845623   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 23:10:25.845698   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 23:10:25.870727   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 23:10:25.870805   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0805 23:10:25.896864   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 23:10:25.896954   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 23:10:25.921692   28839 provision.go:87] duration metric: took 337.38411ms to configureAuth
	I0805 23:10:25.921725   28839 buildroot.go:189] setting minikube options for container-runtime
	I0805 23:10:25.921953   28839 config.go:182] Loaded profile config "ha-044175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:10:25.922062   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:10:25.924817   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:25.925226   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:10:25.925247   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:25.925409   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:10:25.925595   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:10:25.925801   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:10:25.925957   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:10:25.926139   28839 main.go:141] libmachine: Using SSH client type: native
	I0805 23:10:25.926290   28839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0805 23:10:25.926303   28839 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 23:10:26.213499   28839 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
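The printf verbs in the command above appear as %!s(MISSING) because the logger drops the format arguments; the echoed output on the previous line shows what actually reaches the node. A hedged, functionally equivalent reconstruction of the remote command:
	sudo mkdir -p /etc/sysconfig
	echo "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio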
	
	I0805 23:10:26.213524   28839 main.go:141] libmachine: Checking connection to Docker...
	I0805 23:10:26.213555   28839 main.go:141] libmachine: (ha-044175) Calling .GetURL
	I0805 23:10:26.214928   28839 main.go:141] libmachine: (ha-044175) DBG | Using libvirt version 6000000
	I0805 23:10:26.217217   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:26.217551   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:10:26.217574   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:26.217740   28839 main.go:141] libmachine: Docker is up and running!
	I0805 23:10:26.217774   28839 main.go:141] libmachine: Reticulating splines...
	I0805 23:10:26.217782   28839 client.go:171] duration metric: took 25.40805915s to LocalClient.Create
	I0805 23:10:26.217809   28839 start.go:167] duration metric: took 25.408121999s to libmachine.API.Create "ha-044175"
	I0805 23:10:26.217820   28839 start.go:293] postStartSetup for "ha-044175" (driver="kvm2")
	I0805 23:10:26.217834   28839 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 23:10:26.217856   28839 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:10:26.218087   28839 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 23:10:26.218135   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:10:26.220117   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:26.220430   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:10:26.220452   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:26.220567   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:10:26.220743   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:10:26.220984   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:10:26.221150   28839 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/id_rsa Username:docker}
	I0805 23:10:26.302017   28839 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 23:10:26.306495   28839 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 23:10:26.306525   28839 filesync.go:126] Scanning /home/jenkins/minikube-integration/19373-9606/.minikube/addons for local assets ...
	I0805 23:10:26.306598   28839 filesync.go:126] Scanning /home/jenkins/minikube-integration/19373-9606/.minikube/files for local assets ...
	I0805 23:10:26.306688   28839 filesync.go:149] local asset: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem -> 167922.pem in /etc/ssl/certs
	I0805 23:10:26.306700   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem -> /etc/ssl/certs/167922.pem
	I0805 23:10:26.306834   28839 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 23:10:26.316268   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem --> /etc/ssl/certs/167922.pem (1708 bytes)
	I0805 23:10:26.341081   28839 start.go:296] duration metric: took 123.248464ms for postStartSetup
	I0805 23:10:26.341131   28839 main.go:141] libmachine: (ha-044175) Calling .GetConfigRaw
	I0805 23:10:26.341711   28839 main.go:141] libmachine: (ha-044175) Calling .GetIP
	I0805 23:10:26.344242   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:26.344580   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:10:26.344601   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:26.344857   28839 profile.go:143] Saving config to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/config.json ...
	I0805 23:10:26.345045   28839 start.go:128] duration metric: took 25.554324128s to createHost
	I0805 23:10:26.345065   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:10:26.347316   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:26.347742   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:10:26.347773   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:26.347926   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:10:26.348114   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:10:26.348274   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:10:26.348430   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:10:26.348586   28839 main.go:141] libmachine: Using SSH client type: native
	I0805 23:10:26.348790   28839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0805 23:10:26.348845   28839 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 23:10:26.448259   28839 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722899426.426191961
	
	I0805 23:10:26.448285   28839 fix.go:216] guest clock: 1722899426.426191961
	I0805 23:10:26.448293   28839 fix.go:229] Guest: 2024-08-05 23:10:26.426191961 +0000 UTC Remote: 2024-08-05 23:10:26.345055906 +0000 UTC m=+25.661044053 (delta=81.136055ms)
	I0805 23:10:26.448311   28839 fix.go:200] guest clock delta is within tolerance: 81.136055ms
	I0805 23:10:26.448316   28839 start.go:83] releasing machines lock for "ha-044175", held for 25.657662432s
	I0805 23:10:26.448332   28839 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:10:26.448607   28839 main.go:141] libmachine: (ha-044175) Calling .GetIP
	I0805 23:10:26.451550   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:26.451910   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:10:26.451938   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:26.452065   28839 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:10:26.452585   28839 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:10:26.452791   28839 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:10:26.452904   28839 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 23:10:26.452938   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:10:26.453071   28839 ssh_runner.go:195] Run: cat /version.json
	I0805 23:10:26.453103   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:10:26.455498   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:26.455823   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:10:26.455850   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:26.455869   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:26.456007   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:10:26.456262   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:10:26.456307   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:10:26.456327   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:26.456417   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:10:26.456486   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:10:26.456569   28839 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/id_rsa Username:docker}
	I0805 23:10:26.456654   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:10:26.456861   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:10:26.457055   28839 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/id_rsa Username:docker}
	I0805 23:10:26.532093   28839 ssh_runner.go:195] Run: systemctl --version
	I0805 23:10:26.552946   28839 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 23:10:26.717407   28839 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 23:10:26.723705   28839 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 23:10:26.723769   28839 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 23:10:26.740772   28839 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 23:10:26.740799   28839 start.go:495] detecting cgroup driver to use...
	I0805 23:10:26.740872   28839 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 23:10:26.757914   28839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 23:10:26.771892   28839 docker.go:217] disabling cri-docker service (if available) ...
	I0805 23:10:26.771947   28839 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 23:10:26.786392   28839 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 23:10:26.800653   28839 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 23:10:26.912988   28839 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 23:10:27.052129   28839 docker.go:233] disabling docker service ...
	I0805 23:10:27.052196   28839 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 23:10:27.067392   28839 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 23:10:27.080774   28839 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 23:10:27.217830   28839 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 23:10:27.331931   28839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 23:10:27.346720   28839 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 23:10:27.365742   28839 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0805 23:10:27.365794   28839 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:10:27.377789   28839 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 23:10:27.377923   28839 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:10:27.390408   28839 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:10:27.401535   28839 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:10:27.412548   28839 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 23:10:27.423605   28839 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:10:27.434746   28839 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:10:27.452382   28839 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
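The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroupfs as cgroup manager, a "pod" conmon cgroup, and the unprivileged-port sysctl. A hedged spot-check on the node; the expected values are read off the commands above, not off the file itself:
	sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	# roughly expected:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",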
	I0805 23:10:27.463232   28839 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 23:10:27.472975   28839 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 23:10:27.473040   28839 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 23:10:27.487200   28839 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 23:10:27.497333   28839 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 23:10:27.605312   28839 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 23:10:27.745378   28839 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 23:10:27.745456   28839 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 23:10:27.750517   28839 start.go:563] Will wait 60s for crictl version
	I0805 23:10:27.750577   28839 ssh_runner.go:195] Run: which crictl
	I0805 23:10:27.754578   28839 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 23:10:27.790577   28839 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 23:10:27.790663   28839 ssh_runner.go:195] Run: crio --version
	I0805 23:10:27.819956   28839 ssh_runner.go:195] Run: crio --version
	I0805 23:10:27.850591   28839 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0805 23:10:27.851744   28839 main.go:141] libmachine: (ha-044175) Calling .GetIP
	I0805 23:10:27.854702   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:27.855041   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:10:27.855092   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:27.855316   28839 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0805 23:10:27.859437   28839 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 23:10:27.872935   28839 kubeadm.go:883] updating cluster {Name:ha-044175 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-044175 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 23:10:27.873039   28839 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 23:10:27.873108   28839 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 23:10:27.904355   28839 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0805 23:10:27.904422   28839 ssh_runner.go:195] Run: which lz4
	I0805 23:10:27.908408   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0805 23:10:27.908486   28839 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0805 23:10:27.912616   28839 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 23:10:27.912637   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0805 23:10:29.394828   28839 crio.go:462] duration metric: took 1.48636381s to copy over tarball
	I0805 23:10:29.394918   28839 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0805 23:10:31.572647   28839 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.177703625s)
	I0805 23:10:31.572670   28839 crio.go:469] duration metric: took 2.177818197s to extract the tarball
	I0805 23:10:31.572679   28839 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0805 23:10:31.610325   28839 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 23:10:31.658573   28839 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 23:10:31.658597   28839 cache_images.go:84] Images are preloaded, skipping loading
	I0805 23:10:31.658608   28839 kubeadm.go:934] updating node { 192.168.39.57 8443 v1.30.3 crio true true} ...
	I0805 23:10:31.658727   28839 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-044175 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.57
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-044175 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
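The [Unit]/[Service] fragment above is the kubelet drop-in that gets copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a little further down. A hedged check, from inside the node, that systemd resolves the same ExecStart:
	systemctl cat kubelet | sed -n '1,25p'
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf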
	I0805 23:10:31.658810   28839 ssh_runner.go:195] Run: crio config
	I0805 23:10:31.705783   28839 cni.go:84] Creating CNI manager for ""
	I0805 23:10:31.705807   28839 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0805 23:10:31.705819   28839 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 23:10:31.705846   28839 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.57 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-044175 NodeName:ha-044175 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.57"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.57 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 23:10:31.706000   28839 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.57
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-044175"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.57
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.57"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
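The config above is uploaded as /var/tmp/minikube/kubeadm.yaml before the init further below. As a hedged sanity check (not a step minikube logs here), the same kubeadm binary can exercise the config in dry-run mode without committing any cluster state:
	sudo /var/lib/minikube/binaries/v1.30.3/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml --dry-run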
	
	I0805 23:10:31.706026   28839 kube-vip.go:115] generating kube-vip config ...
	I0805 23:10:31.706074   28839 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0805 23:10:31.722986   28839 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0805 23:10:31.723118   28839 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
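With cp_enable and lb_enable set in the manifest above, kube-vip should announce the APIServerHAVIP 192.168.39.254 on eth0 of whichever control-plane node holds the plndr-cp-lock lease. A hedged check from inside a control-plane node once the static pod is running:
	ip -4 addr show dev eth0 | grep 192.168.39.254
	curl -sk https://192.168.39.254:8443/version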
	I0805 23:10:31.723177   28839 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 23:10:31.741961   28839 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 23:10:31.742025   28839 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0805 23:10:31.752136   28839 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0805 23:10:31.769564   28839 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 23:10:31.786741   28839 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0805 23:10:31.803558   28839 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0805 23:10:31.819843   28839 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0805 23:10:31.823792   28839 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 23:10:31.836641   28839 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 23:10:31.952777   28839 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 23:10:31.971266   28839 certs.go:68] Setting up /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175 for IP: 192.168.39.57
	I0805 23:10:31.971288   28839 certs.go:194] generating shared ca certs ...
	I0805 23:10:31.971308   28839 certs.go:226] acquiring lock for ca certs: {Name:mkf35a042c1656d191f542eee7fa087aad4d29d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 23:10:31.971473   28839 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key
	I0805 23:10:31.971526   28839 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key
	I0805 23:10:31.971540   28839 certs.go:256] generating profile certs ...
	I0805 23:10:31.971600   28839 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/client.key
	I0805 23:10:31.971619   28839 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/client.crt with IP's: []
	I0805 23:10:32.186027   28839 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/client.crt ...
	I0805 23:10:32.186060   28839 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/client.crt: {Name:mk07f71c36a907c49015b5156e5111b3f5d0282b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 23:10:32.186230   28839 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/client.key ...
	I0805 23:10:32.186243   28839 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/client.key: {Name:mk2231a6094437615475c7cdb6cc571cd5b6ea01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 23:10:32.186317   28839 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key.5603f7db
	I0805 23:10:32.186332   28839 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt.5603f7db with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.57 192.168.39.254]
	I0805 23:10:32.420262   28839 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt.5603f7db ...
	I0805 23:10:32.420292   28839 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt.5603f7db: {Name:mk1aaa2ceb51818492d02603eaad68351b66ea14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 23:10:32.420466   28839 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key.5603f7db ...
	I0805 23:10:32.420481   28839 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key.5603f7db: {Name:mkc15366e9b5c5b24b06f390af1f821c8ba7678a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 23:10:32.420566   28839 certs.go:381] copying /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt.5603f7db -> /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt
	I0805 23:10:32.420652   28839 certs.go:385] copying /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key.5603f7db -> /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key
	I0805 23:10:32.420712   28839 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.key
	I0805 23:10:32.420728   28839 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.crt with IP's: []
	I0805 23:10:32.833235   28839 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.crt ...
	I0805 23:10:32.833268   28839 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.crt: {Name:mk8751657827ca3752a30f236a6f3fd31a4706b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 23:10:32.833425   28839 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.key ...
	I0805 23:10:32.833435   28839 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.key: {Name:mk06e9887e5410cb0aa672cd986ef1dfbc411de1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 23:10:32.833498   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0805 23:10:32.833515   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0805 23:10:32.833526   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0805 23:10:32.833538   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0805 23:10:32.833551   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0805 23:10:32.833563   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0805 23:10:32.833575   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0805 23:10:32.833587   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0805 23:10:32.833633   28839 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792.pem (1338 bytes)
	W0805 23:10:32.833665   28839 certs.go:480] ignoring /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792_empty.pem, impossibly tiny 0 bytes
	I0805 23:10:32.833674   28839 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 23:10:32.833696   28839 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem (1082 bytes)
	I0805 23:10:32.833719   28839 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem (1123 bytes)
	I0805 23:10:32.833739   28839 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem (1679 bytes)
	I0805 23:10:32.833777   28839 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem (1708 bytes)
	I0805 23:10:32.833803   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem -> /usr/share/ca-certificates/167922.pem
	I0805 23:10:32.833818   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0805 23:10:32.833831   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792.pem -> /usr/share/ca-certificates/16792.pem
	I0805 23:10:32.834399   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 23:10:32.879930   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0805 23:10:32.913975   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 23:10:32.944571   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 23:10:32.968983   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0805 23:10:32.993481   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 23:10:33.017721   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 23:10:33.042186   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0805 23:10:33.066930   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem --> /usr/share/ca-certificates/167922.pem (1708 bytes)
	I0805 23:10:33.090808   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 23:10:33.115733   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792.pem --> /usr/share/ca-certificates/16792.pem (1338 bytes)
	I0805 23:10:33.139841   28839 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 23:10:33.156696   28839 ssh_runner.go:195] Run: openssl version
	I0805 23:10:33.162741   28839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167922.pem && ln -fs /usr/share/ca-certificates/167922.pem /etc/ssl/certs/167922.pem"
	I0805 23:10:33.174420   28839 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167922.pem
	I0805 23:10:33.179263   28839 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 23:03 /usr/share/ca-certificates/167922.pem
	I0805 23:10:33.179326   28839 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167922.pem
	I0805 23:10:33.185184   28839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167922.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 23:10:33.196451   28839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 23:10:33.207630   28839 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 23:10:33.212291   28839 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0805 23:10:33.212352   28839 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 23:10:33.218331   28839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 23:10:33.230037   28839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16792.pem && ln -fs /usr/share/ca-certificates/16792.pem /etc/ssl/certs/16792.pem"
	I0805 23:10:33.241992   28839 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16792.pem
	I0805 23:10:33.246697   28839 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 23:03 /usr/share/ca-certificates/16792.pem
	I0805 23:10:33.246760   28839 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16792.pem
	I0805 23:10:33.252598   28839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16792.pem /etc/ssl/certs/51391683.0"
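The link names used above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's subject-hash convention: a CA in /etc/ssl/certs is looked up under <subject-hash>.0. A hedged reproduction for the minikube CA copied earlier in this run:
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	echo "expected symlink: /etc/ssl/certs/${HASH}.0"
	ls -l "/etc/ssl/certs/${HASH}.0"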
	I0805 23:10:33.264770   28839 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 23:10:33.269421   28839 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0805 23:10:33.269493   28839 kubeadm.go:392] StartCluster: {Name:ha-044175 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-044175 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 23:10:33.269579   28839 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 23:10:33.269637   28839 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 23:10:33.308173   28839 cri.go:89] found id: ""
	I0805 23:10:33.308241   28839 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 23:10:33.319085   28839 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 23:10:33.329568   28839 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 23:10:33.339739   28839 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 23:10:33.339771   28839 kubeadm.go:157] found existing configuration files:
	
	I0805 23:10:33.339822   28839 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 23:10:33.349695   28839 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 23:10:33.349753   28839 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 23:10:33.360133   28839 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 23:10:33.369662   28839 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 23:10:33.369724   28839 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 23:10:33.379529   28839 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 23:10:33.389342   28839 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 23:10:33.389405   28839 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 23:10:33.399427   28839 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 23:10:33.408909   28839 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 23:10:33.408968   28839 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 23:10:33.418772   28839 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
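If the init below had tripped on a preflight check rather than completing, the quickest reproduction would be to re-run just that phase with the same binary path and config; a hedged sketch (the --ignore-preflight-errors subset mirrors the command above):
	sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" \
	    kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml \
	    --ignore-preflight-errors=Swap,NumCPU,Mem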
	I0805 23:10:33.523024   28839 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0805 23:10:33.523150   28839 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 23:10:33.679643   28839 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 23:10:33.679812   28839 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 23:10:33.679952   28839 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 23:10:33.897514   28839 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 23:10:34.014253   28839 out.go:204]   - Generating certificates and keys ...
	I0805 23:10:34.014389   28839 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 23:10:34.014481   28839 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 23:10:34.044964   28839 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0805 23:10:34.226759   28839 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0805 23:10:34.392949   28839 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0805 23:10:34.864847   28839 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0805 23:10:35.000955   28839 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0805 23:10:35.001097   28839 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-044175 localhost] and IPs [192.168.39.57 127.0.0.1 ::1]
	I0805 23:10:35.063745   28839 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0805 23:10:35.063887   28839 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-044175 localhost] and IPs [192.168.39.57 127.0.0.1 ::1]
	I0805 23:10:35.135024   28839 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0805 23:10:35.284912   28839 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0805 23:10:35.612132   28839 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0805 23:10:35.612236   28839 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 23:10:35.793593   28839 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 23:10:36.142430   28839 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0805 23:10:36.298564   28839 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 23:10:36.518325   28839 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 23:10:36.593375   28839 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 23:10:36.593851   28839 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 23:10:36.598532   28839 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 23:10:36.600305   28839 out.go:204]   - Booting up control plane ...
	I0805 23:10:36.600408   28839 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 23:10:36.600475   28839 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 23:10:36.600569   28839 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 23:10:36.617451   28839 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 23:10:36.618440   28839 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 23:10:36.618483   28839 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 23:10:36.762821   28839 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0805 23:10:36.762934   28839 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0805 23:10:37.763884   28839 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001788572s
	I0805 23:10:37.764010   28839 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0805 23:10:43.677609   28839 kubeadm.go:310] [api-check] The API server is healthy after 5.91580752s
	I0805 23:10:43.695829   28839 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 23:10:43.708311   28839 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 23:10:44.240678   28839 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0805 23:10:44.240936   28839 kubeadm.go:310] [mark-control-plane] Marking the node ha-044175 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 23:10:44.253385   28839 kubeadm.go:310] [bootstrap-token] Using token: 51mq8e.2hm5gpr21za1prtm
	I0805 23:10:44.254794   28839 out.go:204]   - Configuring RBAC rules ...
	I0805 23:10:44.254893   28839 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 23:10:44.260438   28839 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 23:10:44.280242   28839 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 23:10:44.288406   28839 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 23:10:44.292808   28839 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 23:10:44.296747   28839 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 23:10:44.310474   28839 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 23:10:44.581975   28839 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0805 23:10:45.088834   28839 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0805 23:10:45.088860   28839 kubeadm.go:310] 
	I0805 23:10:45.088918   28839 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0805 23:10:45.088925   28839 kubeadm.go:310] 
	I0805 23:10:45.088997   28839 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0805 23:10:45.089007   28839 kubeadm.go:310] 
	I0805 23:10:45.089043   28839 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0805 23:10:45.089110   28839 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 23:10:45.089174   28839 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 23:10:45.089184   28839 kubeadm.go:310] 
	I0805 23:10:45.089254   28839 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0805 23:10:45.089266   28839 kubeadm.go:310] 
	I0805 23:10:45.089305   28839 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 23:10:45.089346   28839 kubeadm.go:310] 
	I0805 23:10:45.089431   28839 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0805 23:10:45.089547   28839 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 23:10:45.089663   28839 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 23:10:45.089683   28839 kubeadm.go:310] 
	I0805 23:10:45.089865   28839 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0805 23:10:45.089991   28839 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0805 23:10:45.090002   28839 kubeadm.go:310] 
	I0805 23:10:45.090115   28839 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 51mq8e.2hm5gpr21za1prtm \
	I0805 23:10:45.090263   28839 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80c3f4a7caafd825f47d5f536053424d1d775e8da247cc5594b6b717e711fcd3 \
	I0805 23:10:45.090288   28839 kubeadm.go:310] 	--control-plane 
	I0805 23:10:45.090302   28839 kubeadm.go:310] 
	I0805 23:10:45.090421   28839 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0805 23:10:45.090431   28839 kubeadm.go:310] 
	I0805 23:10:45.090530   28839 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 51mq8e.2hm5gpr21za1prtm \
	I0805 23:10:45.090702   28839 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:80c3f4a7caafd825f47d5f536053424d1d775e8da247cc5594b6b717e711fcd3 
	I0805 23:10:45.090850   28839 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
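The --discovery-token-ca-cert-hash value printed in the join commands above is the SHA-256 of the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate. The following self-contained Go sketch shows how that hash can be recomputed for verification; the CA path is assumed from the certificate directory mentioned earlier in the log.

```go
// Sketch: recompute a kubeadm discovery-token-ca-cert-hash from a CA cert.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	caPath := "/var/lib/minikube/certs/ca.crt" // assumed path; adjust as needed
	pemBytes, err := os.ReadFile(caPath)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found in CA certificate")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// The hash covers the DER-encoded SubjectPublicKeyInfo of the CA cert.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Printf("sha256:%x\n", sum)
}
```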
	I0805 23:10:45.090872   28839 cni.go:84] Creating CNI manager for ""
	I0805 23:10:45.090880   28839 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0805 23:10:45.092852   28839 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0805 23:10:45.094286   28839 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0805 23:10:45.100225   28839 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0805 23:10:45.100244   28839 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0805 23:10:45.119568   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0805 23:10:45.530296   28839 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 23:10:45.530372   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:45.530372   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-044175 minikube.k8s.io/updated_at=2024_08_05T23_10_45_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4 minikube.k8s.io/name=ha-044175 minikube.k8s.io/primary=true
	I0805 23:10:45.671664   28839 ops.go:34] apiserver oom_adj: -16
	I0805 23:10:45.672023   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:46.172899   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:46.672497   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:47.173189   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:47.672329   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:48.172915   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:48.672499   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:49.172269   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:49.672460   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:50.172909   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:50.672458   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:51.172249   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:51.672914   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:52.172398   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:52.672732   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:53.172838   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:53.672974   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:54.172694   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:54.672320   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:55.172792   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:55.672691   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:56.172406   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:56.672920   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:57.172844   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 23:10:57.275504   28839 kubeadm.go:1113] duration metric: took 11.745187263s to wait for elevateKubeSystemPrivileges
	I0805 23:10:57.275555   28839 kubeadm.go:394] duration metric: took 24.006065425s to StartCluster
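The repeated `kubectl get sa default` runs above are a readiness probe: the command is retried roughly every half second until the default ServiceAccount exists, after which RBAC privileges can be granted. A minimal sketch of that polling loop, assuming kubectl is on PATH and using the kubeconfig path from the log:

```go
// Sketch: poll `kubectl get sa default` until it succeeds or a timeout elapses.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultServiceAccount(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
		if err := cmd.Run(); err == nil {
			return nil // the default ServiceAccount exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default ServiceAccount not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultServiceAccount("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```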
	I0805 23:10:57.275610   28839 settings.go:142] acquiring lock: {Name:mkd43028f76794f43f4727efb0b77b9a49886053 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 23:10:57.275717   28839 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19373-9606/kubeconfig
	I0805 23:10:57.276507   28839 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/kubeconfig: {Name:mk4481c5dfe578449439dae4abf8681e1b7df535 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 23:10:57.276757   28839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0805 23:10:57.276766   28839 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 23:10:57.276822   28839 addons.go:69] Setting storage-provisioner=true in profile "ha-044175"
	I0805 23:10:57.276835   28839 addons.go:69] Setting default-storageclass=true in profile "ha-044175"
	I0805 23:10:57.276854   28839 addons.go:234] Setting addon storage-provisioner=true in "ha-044175"
	I0805 23:10:57.276868   28839 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-044175"
	I0805 23:10:57.276890   28839 host.go:66] Checking if "ha-044175" exists ...
	I0805 23:10:57.276751   28839 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 23:10:57.276946   28839 config.go:182] Loaded profile config "ha-044175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:10:57.276962   28839 start.go:241] waiting for startup goroutines ...
	I0805 23:10:57.277272   28839 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:10:57.277307   28839 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:10:57.277327   28839 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:10:57.277352   28839 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:10:57.292517   28839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35767
	I0805 23:10:57.292553   28839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40049
	I0805 23:10:57.292968   28839 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:10:57.292974   28839 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:10:57.293506   28839 main.go:141] libmachine: Using API Version  1
	I0805 23:10:57.293534   28839 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:10:57.293659   28839 main.go:141] libmachine: Using API Version  1
	I0805 23:10:57.293683   28839 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:10:57.293948   28839 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:10:57.294073   28839 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:10:57.294249   28839 main.go:141] libmachine: (ha-044175) Calling .GetState
	I0805 23:10:57.294490   28839 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:10:57.294520   28839 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:10:57.296457   28839 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19373-9606/kubeconfig
	I0805 23:10:57.296834   28839 kapi.go:59] client config for ha-044175: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/client.crt", KeyFile:"/home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/client.key", CAFile:"/home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02940), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 23:10:57.297473   28839 cert_rotation.go:137] Starting client certificate rotation controller
	I0805 23:10:57.297778   28839 addons.go:234] Setting addon default-storageclass=true in "ha-044175"
	I0805 23:10:57.297825   28839 host.go:66] Checking if "ha-044175" exists ...
	I0805 23:10:57.298216   28839 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:10:57.298248   28839 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:10:57.310071   28839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36843
	I0805 23:10:57.310475   28839 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:10:57.310970   28839 main.go:141] libmachine: Using API Version  1
	I0805 23:10:57.310997   28839 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:10:57.311316   28839 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:10:57.311517   28839 main.go:141] libmachine: (ha-044175) Calling .GetState
	I0805 23:10:57.312863   28839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44167
	I0805 23:10:57.312905   28839 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:10:57.313336   28839 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:10:57.313752   28839 main.go:141] libmachine: Using API Version  1
	I0805 23:10:57.313780   28839 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:10:57.314088   28839 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:10:57.314570   28839 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:10:57.314596   28839 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:10:57.315368   28839 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 23:10:57.316845   28839 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 23:10:57.316864   28839 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 23:10:57.316881   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:10:57.319703   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:57.320093   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:10:57.320115   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:57.320374   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:10:57.320601   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:10:57.320798   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:10:57.320973   28839 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/id_rsa Username:docker}
	I0805 23:10:57.330343   28839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37261
	I0805 23:10:57.330721   28839 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:10:57.331259   28839 main.go:141] libmachine: Using API Version  1
	I0805 23:10:57.331286   28839 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:10:57.331586   28839 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:10:57.331794   28839 main.go:141] libmachine: (ha-044175) Calling .GetState
	I0805 23:10:57.333476   28839 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:10:57.333692   28839 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 23:10:57.333706   28839 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 23:10:57.333720   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:10:57.336787   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:57.337284   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:10:57.337314   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:10:57.337462   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:10:57.337656   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:10:57.337899   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:10:57.338069   28839 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/id_rsa Username:docker}
	I0805 23:10:57.429490   28839 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0805 23:10:57.496579   28839 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 23:10:57.528658   28839 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 23:10:58.062941   28839 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0805 23:10:58.303807   28839 main.go:141] libmachine: Making call to close driver server
	I0805 23:10:58.303837   28839 main.go:141] libmachine: (ha-044175) Calling .Close
	I0805 23:10:58.303836   28839 main.go:141] libmachine: Making call to close driver server
	I0805 23:10:58.303857   28839 main.go:141] libmachine: (ha-044175) Calling .Close
	I0805 23:10:58.304140   28839 main.go:141] libmachine: Successfully made call to close driver server
	I0805 23:10:58.304154   28839 main.go:141] libmachine: (ha-044175) DBG | Closing plugin on server side
	I0805 23:10:58.304157   28839 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 23:10:58.304169   28839 main.go:141] libmachine: Making call to close driver server
	I0805 23:10:58.304177   28839 main.go:141] libmachine: (ha-044175) Calling .Close
	I0805 23:10:58.304196   28839 main.go:141] libmachine: Successfully made call to close driver server
	I0805 23:10:58.304212   28839 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 23:10:58.304248   28839 main.go:141] libmachine: Making call to close driver server
	I0805 23:10:58.304286   28839 main.go:141] libmachine: (ha-044175) Calling .Close
	I0805 23:10:58.304214   28839 main.go:141] libmachine: (ha-044175) DBG | Closing plugin on server side
	I0805 23:10:58.304417   28839 main.go:141] libmachine: Successfully made call to close driver server
	I0805 23:10:58.304430   28839 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 23:10:58.304494   28839 main.go:141] libmachine: (ha-044175) DBG | Closing plugin on server side
	I0805 23:10:58.304505   28839 main.go:141] libmachine: Successfully made call to close driver server
	I0805 23:10:58.304516   28839 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 23:10:58.304683   28839 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0805 23:10:58.304690   28839 round_trippers.go:469] Request Headers:
	I0805 23:10:58.304697   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:10:58.304701   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:10:58.317901   28839 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0805 23:10:58.319494   28839 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0805 23:10:58.319513   28839 round_trippers.go:469] Request Headers:
	I0805 23:10:58.319524   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:10:58.319532   28839 round_trippers.go:473]     Content-Type: application/json
	I0805 23:10:58.319537   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:10:58.321833   28839 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
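The GET and PUT against /apis/storage.k8s.io/v1/storageclasses above correspond to marking the "standard" StorageClass as the cluster default. One way to inspect what those calls touch is to list the StorageClasses and read the default-class annotation; a hedged client-go sketch, reusing the kubeconfig path from the log:

```go
// Sketch: list StorageClasses and report which one is marked as default.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19373-9606/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	scs, err := cs.StorageV1().StorageClasses().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, sc := range scs.Items {
		isDefault := sc.Annotations["storageclass.kubernetes.io/is-default-class"] == "true"
		fmt.Printf("%s default=%v\n", sc.Name, isDefault)
	}
}
```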
	I0805 23:10:58.321997   28839 main.go:141] libmachine: Making call to close driver server
	I0805 23:10:58.322014   28839 main.go:141] libmachine: (ha-044175) Calling .Close
	I0805 23:10:58.322347   28839 main.go:141] libmachine: (ha-044175) DBG | Closing plugin on server side
	I0805 23:10:58.322370   28839 main.go:141] libmachine: Successfully made call to close driver server
	I0805 23:10:58.322379   28839 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 23:10:58.324236   28839 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0805 23:10:58.325438   28839 addons.go:510] duration metric: took 1.048665605s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0805 23:10:58.325495   28839 start.go:246] waiting for cluster config update ...
	I0805 23:10:58.325514   28839 start.go:255] writing updated cluster config ...
	I0805 23:10:58.327096   28839 out.go:177] 
	I0805 23:10:58.328594   28839 config.go:182] Loaded profile config "ha-044175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:10:58.328685   28839 profile.go:143] Saving config to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/config.json ...
	I0805 23:10:58.330675   28839 out.go:177] * Starting "ha-044175-m02" control-plane node in "ha-044175" cluster
	I0805 23:10:58.332169   28839 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 23:10:58.332199   28839 cache.go:56] Caching tarball of preloaded images
	I0805 23:10:58.332318   28839 preload.go:172] Found /home/jenkins/minikube-integration/19373-9606/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0805 23:10:58.332335   28839 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0805 23:10:58.332425   28839 profile.go:143] Saving config to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/config.json ...
	I0805 23:10:58.332683   28839 start.go:360] acquireMachinesLock for ha-044175-m02: {Name:mkd2ba511c39504598222edbf83078b718329186 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 23:10:58.332751   28839 start.go:364] duration metric: took 38.793µs to acquireMachinesLock for "ha-044175-m02"
	I0805 23:10:58.332778   28839 start.go:93] Provisioning new machine with config: &{Name:ha-044175 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.30.3 ClusterName:ha-044175 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cert
Expiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 23:10:58.332926   28839 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0805 23:10:58.334475   28839 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 23:10:58.334588   28839 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:10:58.334623   28839 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:10:58.349611   28839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38127
	I0805 23:10:58.350167   28839 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:10:58.350729   28839 main.go:141] libmachine: Using API Version  1
	I0805 23:10:58.350750   28839 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:10:58.351125   28839 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:10:58.351336   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetMachineName
	I0805 23:10:58.351515   28839 main.go:141] libmachine: (ha-044175-m02) Calling .DriverName
	I0805 23:10:58.351685   28839 start.go:159] libmachine.API.Create for "ha-044175" (driver="kvm2")
	I0805 23:10:58.351712   28839 client.go:168] LocalClient.Create starting
	I0805 23:10:58.351789   28839 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem
	I0805 23:10:58.351877   28839 main.go:141] libmachine: Decoding PEM data...
	I0805 23:10:58.351913   28839 main.go:141] libmachine: Parsing certificate...
	I0805 23:10:58.351991   28839 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem
	I0805 23:10:58.352020   28839 main.go:141] libmachine: Decoding PEM data...
	I0805 23:10:58.352036   28839 main.go:141] libmachine: Parsing certificate...
	I0805 23:10:58.352061   28839 main.go:141] libmachine: Running pre-create checks...
	I0805 23:10:58.352072   28839 main.go:141] libmachine: (ha-044175-m02) Calling .PreCreateCheck
	I0805 23:10:58.352302   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetConfigRaw
	I0805 23:10:58.352725   28839 main.go:141] libmachine: Creating machine...
	I0805 23:10:58.352741   28839 main.go:141] libmachine: (ha-044175-m02) Calling .Create
	I0805 23:10:58.352951   28839 main.go:141] libmachine: (ha-044175-m02) Creating KVM machine...
	I0805 23:10:58.354325   28839 main.go:141] libmachine: (ha-044175-m02) DBG | found existing default KVM network
	I0805 23:10:58.354597   28839 main.go:141] libmachine: (ha-044175-m02) DBG | found existing private KVM network mk-ha-044175
	I0805 23:10:58.354812   28839 main.go:141] libmachine: (ha-044175-m02) Setting up store path in /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m02 ...
	I0805 23:10:58.354866   28839 main.go:141] libmachine: (ha-044175-m02) Building disk image from file:///home/jenkins/minikube-integration/19373-9606/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0805 23:10:58.354889   28839 main.go:141] libmachine: (ha-044175-m02) DBG | I0805 23:10:58.354789   29230 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19373-9606/.minikube
	I0805 23:10:58.355017   28839 main.go:141] libmachine: (ha-044175-m02) Downloading /home/jenkins/minikube-integration/19373-9606/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19373-9606/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0805 23:10:58.586150   28839 main.go:141] libmachine: (ha-044175-m02) DBG | I0805 23:10:58.585975   29230 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m02/id_rsa...
	I0805 23:10:58.799311   28839 main.go:141] libmachine: (ha-044175-m02) DBG | I0805 23:10:58.799193   29230 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m02/ha-044175-m02.rawdisk...
	I0805 23:10:58.799343   28839 main.go:141] libmachine: (ha-044175-m02) DBG | Writing magic tar header
	I0805 23:10:58.799362   28839 main.go:141] libmachine: (ha-044175-m02) DBG | Writing SSH key tar header
	I0805 23:10:58.799425   28839 main.go:141] libmachine: (ha-044175-m02) DBG | I0805 23:10:58.799355   29230 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m02 ...
	I0805 23:10:58.799483   28839 main.go:141] libmachine: (ha-044175-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m02
	I0805 23:10:58.799505   28839 main.go:141] libmachine: (ha-044175-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19373-9606/.minikube/machines
	I0805 23:10:58.799519   28839 main.go:141] libmachine: (ha-044175-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19373-9606/.minikube
	I0805 23:10:58.799535   28839 main.go:141] libmachine: (ha-044175-m02) Setting executable bit set on /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m02 (perms=drwx------)
	I0805 23:10:58.799549   28839 main.go:141] libmachine: (ha-044175-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19373-9606
	I0805 23:10:58.799567   28839 main.go:141] libmachine: (ha-044175-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0805 23:10:58.799578   28839 main.go:141] libmachine: (ha-044175-m02) DBG | Checking permissions on dir: /home/jenkins
	I0805 23:10:58.799591   28839 main.go:141] libmachine: (ha-044175-m02) DBG | Checking permissions on dir: /home
	I0805 23:10:58.799605   28839 main.go:141] libmachine: (ha-044175-m02) DBG | Skipping /home - not owner
	I0805 23:10:58.799622   28839 main.go:141] libmachine: (ha-044175-m02) Setting executable bit set on /home/jenkins/minikube-integration/19373-9606/.minikube/machines (perms=drwxr-xr-x)
	I0805 23:10:58.799640   28839 main.go:141] libmachine: (ha-044175-m02) Setting executable bit set on /home/jenkins/minikube-integration/19373-9606/.minikube (perms=drwxr-xr-x)
	I0805 23:10:58.799654   28839 main.go:141] libmachine: (ha-044175-m02) Setting executable bit set on /home/jenkins/minikube-integration/19373-9606 (perms=drwxrwxr-x)
	I0805 23:10:58.799671   28839 main.go:141] libmachine: (ha-044175-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0805 23:10:58.799683   28839 main.go:141] libmachine: (ha-044175-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0805 23:10:58.799694   28839 main.go:141] libmachine: (ha-044175-m02) Creating domain...
	I0805 23:10:58.800749   28839 main.go:141] libmachine: (ha-044175-m02) define libvirt domain using xml: 
	I0805 23:10:58.800770   28839 main.go:141] libmachine: (ha-044175-m02) <domain type='kvm'>
	I0805 23:10:58.800803   28839 main.go:141] libmachine: (ha-044175-m02)   <name>ha-044175-m02</name>
	I0805 23:10:58.800826   28839 main.go:141] libmachine: (ha-044175-m02)   <memory unit='MiB'>2200</memory>
	I0805 23:10:58.800839   28839 main.go:141] libmachine: (ha-044175-m02)   <vcpu>2</vcpu>
	I0805 23:10:58.800848   28839 main.go:141] libmachine: (ha-044175-m02)   <features>
	I0805 23:10:58.800859   28839 main.go:141] libmachine: (ha-044175-m02)     <acpi/>
	I0805 23:10:58.800880   28839 main.go:141] libmachine: (ha-044175-m02)     <apic/>
	I0805 23:10:58.800891   28839 main.go:141] libmachine: (ha-044175-m02)     <pae/>
	I0805 23:10:58.800898   28839 main.go:141] libmachine: (ha-044175-m02)     
	I0805 23:10:58.800909   28839 main.go:141] libmachine: (ha-044175-m02)   </features>
	I0805 23:10:58.800921   28839 main.go:141] libmachine: (ha-044175-m02)   <cpu mode='host-passthrough'>
	I0805 23:10:58.800932   28839 main.go:141] libmachine: (ha-044175-m02)   
	I0805 23:10:58.800942   28839 main.go:141] libmachine: (ha-044175-m02)   </cpu>
	I0805 23:10:58.800953   28839 main.go:141] libmachine: (ha-044175-m02)   <os>
	I0805 23:10:58.800963   28839 main.go:141] libmachine: (ha-044175-m02)     <type>hvm</type>
	I0805 23:10:58.800974   28839 main.go:141] libmachine: (ha-044175-m02)     <boot dev='cdrom'/>
	I0805 23:10:58.800984   28839 main.go:141] libmachine: (ha-044175-m02)     <boot dev='hd'/>
	I0805 23:10:58.800993   28839 main.go:141] libmachine: (ha-044175-m02)     <bootmenu enable='no'/>
	I0805 23:10:58.801002   28839 main.go:141] libmachine: (ha-044175-m02)   </os>
	I0805 23:10:58.801010   28839 main.go:141] libmachine: (ha-044175-m02)   <devices>
	I0805 23:10:58.801020   28839 main.go:141] libmachine: (ha-044175-m02)     <disk type='file' device='cdrom'>
	I0805 23:10:58.801036   28839 main.go:141] libmachine: (ha-044175-m02)       <source file='/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m02/boot2docker.iso'/>
	I0805 23:10:58.801046   28839 main.go:141] libmachine: (ha-044175-m02)       <target dev='hdc' bus='scsi'/>
	I0805 23:10:58.801055   28839 main.go:141] libmachine: (ha-044175-m02)       <readonly/>
	I0805 23:10:58.801065   28839 main.go:141] libmachine: (ha-044175-m02)     </disk>
	I0805 23:10:58.801074   28839 main.go:141] libmachine: (ha-044175-m02)     <disk type='file' device='disk'>
	I0805 23:10:58.801086   28839 main.go:141] libmachine: (ha-044175-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0805 23:10:58.801101   28839 main.go:141] libmachine: (ha-044175-m02)       <source file='/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m02/ha-044175-m02.rawdisk'/>
	I0805 23:10:58.801111   28839 main.go:141] libmachine: (ha-044175-m02)       <target dev='hda' bus='virtio'/>
	I0805 23:10:58.801122   28839 main.go:141] libmachine: (ha-044175-m02)     </disk>
	I0805 23:10:58.801130   28839 main.go:141] libmachine: (ha-044175-m02)     <interface type='network'>
	I0805 23:10:58.801145   28839 main.go:141] libmachine: (ha-044175-m02)       <source network='mk-ha-044175'/>
	I0805 23:10:58.801155   28839 main.go:141] libmachine: (ha-044175-m02)       <model type='virtio'/>
	I0805 23:10:58.801162   28839 main.go:141] libmachine: (ha-044175-m02)     </interface>
	I0805 23:10:58.801175   28839 main.go:141] libmachine: (ha-044175-m02)     <interface type='network'>
	I0805 23:10:58.801183   28839 main.go:141] libmachine: (ha-044175-m02)       <source network='default'/>
	I0805 23:10:58.801193   28839 main.go:141] libmachine: (ha-044175-m02)       <model type='virtio'/>
	I0805 23:10:58.801205   28839 main.go:141] libmachine: (ha-044175-m02)     </interface>
	I0805 23:10:58.801214   28839 main.go:141] libmachine: (ha-044175-m02)     <serial type='pty'>
	I0805 23:10:58.801223   28839 main.go:141] libmachine: (ha-044175-m02)       <target port='0'/>
	I0805 23:10:58.801231   28839 main.go:141] libmachine: (ha-044175-m02)     </serial>
	I0805 23:10:58.801238   28839 main.go:141] libmachine: (ha-044175-m02)     <console type='pty'>
	I0805 23:10:58.801248   28839 main.go:141] libmachine: (ha-044175-m02)       <target type='serial' port='0'/>
	I0805 23:10:58.801256   28839 main.go:141] libmachine: (ha-044175-m02)     </console>
	I0805 23:10:58.801268   28839 main.go:141] libmachine: (ha-044175-m02)     <rng model='virtio'>
	I0805 23:10:58.801277   28839 main.go:141] libmachine: (ha-044175-m02)       <backend model='random'>/dev/random</backend>
	I0805 23:10:58.801285   28839 main.go:141] libmachine: (ha-044175-m02)     </rng>
	I0805 23:10:58.801293   28839 main.go:141] libmachine: (ha-044175-m02)     
	I0805 23:10:58.801301   28839 main.go:141] libmachine: (ha-044175-m02)     
	I0805 23:10:58.801308   28839 main.go:141] libmachine: (ha-044175-m02)   </devices>
	I0805 23:10:58.801318   28839 main.go:141] libmachine: (ha-044175-m02) </domain>
	I0805 23:10:58.801327   28839 main.go:141] libmachine: (ha-044175-m02) 
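The domain definition logged above is libvirt XML generated for the new KVM guest. The sketch below renders a trimmed, illustrative subset of that XML with Go's text/template; the template and placeholder disk path are assumptions for demonstration, not minikube's actual template.

```go
// Sketch: render a minimal libvirt domain definition from a config struct.
package main

import (
	"log"
	"os"
	"text/template"
)

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

type domainConfig struct {
	Name      string
	MemoryMiB int
	CPUs      int
	DiskPath  string
	Network   string
}

func main() {
	t := template.Must(template.New("domain").Parse(domainTmpl))
	cfg := domainConfig{
		Name:      "ha-044175-m02",
		MemoryMiB: 2200,
		CPUs:      2,
		DiskPath:  "/path/to/ha-044175-m02.rawdisk", // placeholder path
		Network:   "mk-ha-044175",
	}
	if err := t.Execute(os.Stdout, cfg); err != nil {
		log.Fatal(err)
	}
}
```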
	I0805 23:10:58.807890   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:99:c3:ce in network default
	I0805 23:10:58.808449   28839 main.go:141] libmachine: (ha-044175-m02) Ensuring networks are active...
	I0805 23:10:58.808501   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:10:58.809157   28839 main.go:141] libmachine: (ha-044175-m02) Ensuring network default is active
	I0805 23:10:58.809406   28839 main.go:141] libmachine: (ha-044175-m02) Ensuring network mk-ha-044175 is active
	I0805 23:10:58.809712   28839 main.go:141] libmachine: (ha-044175-m02) Getting domain xml...
	I0805 23:10:58.810292   28839 main.go:141] libmachine: (ha-044175-m02) Creating domain...
	I0805 23:11:00.027893   28839 main.go:141] libmachine: (ha-044175-m02) Waiting to get IP...
	I0805 23:11:00.028630   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:00.029203   28839 main.go:141] libmachine: (ha-044175-m02) DBG | unable to find current IP address of domain ha-044175-m02 in network mk-ha-044175
	I0805 23:11:00.029235   28839 main.go:141] libmachine: (ha-044175-m02) DBG | I0805 23:11:00.029161   29230 retry.go:31] will retry after 248.488515ms: waiting for machine to come up
	I0805 23:11:00.279766   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:00.280307   28839 main.go:141] libmachine: (ha-044175-m02) DBG | unable to find current IP address of domain ha-044175-m02 in network mk-ha-044175
	I0805 23:11:00.280335   28839 main.go:141] libmachine: (ha-044175-m02) DBG | I0805 23:11:00.280249   29230 retry.go:31] will retry after 355.99083ms: waiting for machine to come up
	I0805 23:11:00.638118   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:00.638625   28839 main.go:141] libmachine: (ha-044175-m02) DBG | unable to find current IP address of domain ha-044175-m02 in network mk-ha-044175
	I0805 23:11:00.638652   28839 main.go:141] libmachine: (ha-044175-m02) DBG | I0805 23:11:00.638592   29230 retry.go:31] will retry after 297.161612ms: waiting for machine to come up
	I0805 23:11:00.937132   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:00.937612   28839 main.go:141] libmachine: (ha-044175-m02) DBG | unable to find current IP address of domain ha-044175-m02 in network mk-ha-044175
	I0805 23:11:00.937643   28839 main.go:141] libmachine: (ha-044175-m02) DBG | I0805 23:11:00.937553   29230 retry.go:31] will retry after 401.402039ms: waiting for machine to come up
	I0805 23:11:01.340305   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:01.340858   28839 main.go:141] libmachine: (ha-044175-m02) DBG | unable to find current IP address of domain ha-044175-m02 in network mk-ha-044175
	I0805 23:11:01.340884   28839 main.go:141] libmachine: (ha-044175-m02) DBG | I0805 23:11:01.340832   29230 retry.go:31] will retry after 485.040791ms: waiting for machine to come up
	I0805 23:11:01.827501   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:01.827967   28839 main.go:141] libmachine: (ha-044175-m02) DBG | unable to find current IP address of domain ha-044175-m02 in network mk-ha-044175
	I0805 23:11:01.827991   28839 main.go:141] libmachine: (ha-044175-m02) DBG | I0805 23:11:01.827903   29230 retry.go:31] will retry after 934.253059ms: waiting for machine to come up
	I0805 23:11:02.764170   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:02.764627   28839 main.go:141] libmachine: (ha-044175-m02) DBG | unable to find current IP address of domain ha-044175-m02 in network mk-ha-044175
	I0805 23:11:02.764689   28839 main.go:141] libmachine: (ha-044175-m02) DBG | I0805 23:11:02.764598   29230 retry.go:31] will retry after 896.946537ms: waiting for machine to come up
	I0805 23:11:03.663096   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:03.663641   28839 main.go:141] libmachine: (ha-044175-m02) DBG | unable to find current IP address of domain ha-044175-m02 in network mk-ha-044175
	I0805 23:11:03.663673   28839 main.go:141] libmachine: (ha-044175-m02) DBG | I0805 23:11:03.663581   29230 retry.go:31] will retry after 923.400753ms: waiting for machine to come up
	I0805 23:11:04.588190   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:04.588678   28839 main.go:141] libmachine: (ha-044175-m02) DBG | unable to find current IP address of domain ha-044175-m02 in network mk-ha-044175
	I0805 23:11:04.588713   28839 main.go:141] libmachine: (ha-044175-m02) DBG | I0805 23:11:04.588629   29230 retry.go:31] will retry after 1.43340992s: waiting for machine to come up
	I0805 23:11:06.024240   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:06.024737   28839 main.go:141] libmachine: (ha-044175-m02) DBG | unable to find current IP address of domain ha-044175-m02 in network mk-ha-044175
	I0805 23:11:06.024773   28839 main.go:141] libmachine: (ha-044175-m02) DBG | I0805 23:11:06.024699   29230 retry.go:31] will retry after 1.530394502s: waiting for machine to come up
	I0805 23:11:07.556260   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:07.556768   28839 main.go:141] libmachine: (ha-044175-m02) DBG | unable to find current IP address of domain ha-044175-m02 in network mk-ha-044175
	I0805 23:11:07.556795   28839 main.go:141] libmachine: (ha-044175-m02) DBG | I0805 23:11:07.556712   29230 retry.go:31] will retry after 2.88336861s: waiting for machine to come up
	I0805 23:11:10.441210   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:10.441647   28839 main.go:141] libmachine: (ha-044175-m02) DBG | unable to find current IP address of domain ha-044175-m02 in network mk-ha-044175
	I0805 23:11:10.441678   28839 main.go:141] libmachine: (ha-044175-m02) DBG | I0805 23:11:10.441597   29230 retry.go:31] will retry after 3.081446368s: waiting for machine to come up
	I0805 23:11:13.525137   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:13.525456   28839 main.go:141] libmachine: (ha-044175-m02) DBG | unable to find current IP address of domain ha-044175-m02 in network mk-ha-044175
	I0805 23:11:13.525498   28839 main.go:141] libmachine: (ha-044175-m02) DBG | I0805 23:11:13.525431   29230 retry.go:31] will retry after 4.471112661s: waiting for machine to come up
	I0805 23:11:18.000407   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:18.000819   28839 main.go:141] libmachine: (ha-044175-m02) DBG | unable to find current IP address of domain ha-044175-m02 in network mk-ha-044175
	I0805 23:11:18.000837   28839 main.go:141] libmachine: (ha-044175-m02) DBG | I0805 23:11:18.000779   29230 retry.go:31] will retry after 5.282329341s: waiting for machine to come up
	I0805 23:11:23.288261   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:23.288807   28839 main.go:141] libmachine: (ha-044175-m02) Found IP for machine: 192.168.39.112
	I0805 23:11:23.288835   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has current primary IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:23.288844   28839 main.go:141] libmachine: (ha-044175-m02) Reserving static IP address...
	I0805 23:11:23.289387   28839 main.go:141] libmachine: (ha-044175-m02) DBG | unable to find host DHCP lease matching {name: "ha-044175-m02", mac: "52:54:00:84:bb:47", ip: "192.168.39.112"} in network mk-ha-044175
	I0805 23:11:23.364345   28839 main.go:141] libmachine: (ha-044175-m02) DBG | Getting to WaitForSSH function...
	I0805 23:11:23.364386   28839 main.go:141] libmachine: (ha-044175-m02) Reserved static IP address: 192.168.39.112
	I0805 23:11:23.364401   28839 main.go:141] libmachine: (ha-044175-m02) Waiting for SSH to be available...
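The "will retry after ..." lines above show the wait-for-IP loop backing off with growing delays until the DHCP lease appears. A minimal sketch of that retry pattern, with an illustrative condition and bounds rather than minikube's actual retry implementation:

```go
// Sketch: poll a condition with growing, jittered delays until success or deadline.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(deadline time.Duration, check func() (bool, error)) error {
	delay := 250 * time.Millisecond
	end := time.Now().Add(deadline)
	for time.Now().Before(end) {
		ok, err := check()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		// Grow the delay and add jitter, roughly like the increasing
		// waits (248ms, 355ms, ... 5.28s) logged above.
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		time.Sleep(delay + jitter)
		delay *= 2
		if delay > 5*time.Second {
			delay = 5 * time.Second
		}
	}
	return errors.New("timed out waiting for condition")
}

func main() {
	start := time.Now()
	err := retryWithBackoff(30*time.Second, func() (bool, error) {
		// Stand-in for "does the domain have an IP address yet?".
		return time.Since(start) > 2*time.Second, nil
	})
	fmt.Println("done:", err)
}
```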
	I0805 23:11:23.366926   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:23.367273   28839 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:minikube Clientid:01:52:54:00:84:bb:47}
	I0805 23:11:23.367305   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:23.367491   28839 main.go:141] libmachine: (ha-044175-m02) DBG | Using SSH client type: external
	I0805 23:11:23.367512   28839 main.go:141] libmachine: (ha-044175-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m02/id_rsa (-rw-------)
	I0805 23:11:23.367541   28839 main.go:141] libmachine: (ha-044175-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.112 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 23:11:23.367559   28839 main.go:141] libmachine: (ha-044175-m02) DBG | About to run SSH command:
	I0805 23:11:23.367579   28839 main.go:141] libmachine: (ha-044175-m02) DBG | exit 0
	I0805 23:11:23.496309   28839 main.go:141] libmachine: (ha-044175-m02) DBG | SSH cmd err, output: <nil>: 
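The "Using SSH client type: external" block above probes machine readiness by shelling out to /usr/bin/ssh with host-key checking disabled and running a bare `exit 0`. A rough sketch of assembling that probe with os/exec; the key path is deliberately elided and the whole thing is only an illustration of the option list printed in the log, not a reusable API:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/.../id_rsa", // key path elided; see the full path in the log line above
		"-p", "22",
		"docker@192.168.39.112",
		"exit 0", // the probe command: a zero exit means sshd is up and the key works
	}
	out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
}
```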
	I0805 23:11:23.496557   28839 main.go:141] libmachine: (ha-044175-m02) KVM machine creation complete!
	I0805 23:11:23.496917   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetConfigRaw
	I0805 23:11:23.497407   28839 main.go:141] libmachine: (ha-044175-m02) Calling .DriverName
	I0805 23:11:23.497585   28839 main.go:141] libmachine: (ha-044175-m02) Calling .DriverName
	I0805 23:11:23.497727   28839 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0805 23:11:23.497741   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetState
	I0805 23:11:23.499071   28839 main.go:141] libmachine: Detecting operating system of created instance...
	I0805 23:11:23.499103   28839 main.go:141] libmachine: Waiting for SSH to be available...
	I0805 23:11:23.499111   28839 main.go:141] libmachine: Getting to WaitForSSH function...
	I0805 23:11:23.499122   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHHostname
	I0805 23:11:23.501648   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:23.502021   28839 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:11:23.502050   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:23.502161   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHPort
	I0805 23:11:23.502340   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHKeyPath
	I0805 23:11:23.502515   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHKeyPath
	I0805 23:11:23.502637   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHUsername
	I0805 23:11:23.502830   28839 main.go:141] libmachine: Using SSH client type: native
	I0805 23:11:23.503019   28839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.112 22 <nil> <nil>}
	I0805 23:11:23.503032   28839 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0805 23:11:23.610545   28839 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 23:11:23.610566   28839 main.go:141] libmachine: Detecting the provisioner...
	I0805 23:11:23.610576   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHHostname
	I0805 23:11:23.613574   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:23.613964   28839 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:11:23.613993   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:23.614108   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHPort
	I0805 23:11:23.614273   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHKeyPath
	I0805 23:11:23.614473   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHKeyPath
	I0805 23:11:23.614639   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHUsername
	I0805 23:11:23.614851   28839 main.go:141] libmachine: Using SSH client type: native
	I0805 23:11:23.615022   28839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.112 22 <nil> <nil>}
	I0805 23:11:23.615035   28839 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0805 23:11:23.720105   28839 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0805 23:11:23.720191   28839 main.go:141] libmachine: found compatible host: buildroot
	I0805 23:11:23.720204   28839 main.go:141] libmachine: Provisioning with buildroot...
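Provisioner detection is simply `cat /etc/os-release` followed by matching the ID/NAME fields, which is how the buildroot host is recognized above. A small sketch of that parsing step, assuming the raw file contents have already been fetched over SSH (parseOSRelease is a made-up helper name):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseOSRelease extracts KEY=VALUE pairs from /etc/os-release content,
// stripping surrounding quotes from values.
func parseOSRelease(raw string) map[string]string {
	info := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(raw))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || !strings.Contains(line, "=") {
			continue
		}
		kv := strings.SplitN(line, "=", 2)
		info[kv[0]] = strings.Trim(kv[1], `"`)
	}
	return info
}

func main() {
	raw := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	info := parseOSRelease(raw)
	fmt.Println("found compatible host:", info["ID"]) // buildroot
}
```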
	I0805 23:11:23.720211   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetMachineName
	I0805 23:11:23.720443   28839 buildroot.go:166] provisioning hostname "ha-044175-m02"
	I0805 23:11:23.720464   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetMachineName
	I0805 23:11:23.720655   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHHostname
	I0805 23:11:23.723447   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:23.723826   28839 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:11:23.723846   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:23.724037   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHPort
	I0805 23:11:23.724209   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHKeyPath
	I0805 23:11:23.724357   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHKeyPath
	I0805 23:11:23.724504   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHUsername
	I0805 23:11:23.724673   28839 main.go:141] libmachine: Using SSH client type: native
	I0805 23:11:23.724850   28839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.112 22 <nil> <nil>}
	I0805 23:11:23.724862   28839 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-044175-m02 && echo "ha-044175-m02" | sudo tee /etc/hostname
	I0805 23:11:23.848323   28839 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-044175-m02
	
	I0805 23:11:23.848360   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHHostname
	I0805 23:11:23.851291   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:23.851733   28839 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:11:23.851771   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:23.851921   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHPort
	I0805 23:11:23.852127   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHKeyPath
	I0805 23:11:23.852398   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHKeyPath
	I0805 23:11:23.852545   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHUsername
	I0805 23:11:23.852769   28839 main.go:141] libmachine: Using SSH client type: native
	I0805 23:11:23.852973   28839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.112 22 <nil> <nil>}
	I0805 23:11:23.852990   28839 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-044175-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-044175-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-044175-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 23:11:23.968097   28839 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 23:11:23.968135   28839 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19373-9606/.minikube CaCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19373-9606/.minikube}
	I0805 23:11:23.968155   28839 buildroot.go:174] setting up certificates
	I0805 23:11:23.968169   28839 provision.go:84] configureAuth start
	I0805 23:11:23.968178   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetMachineName
	I0805 23:11:23.968428   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetIP
	I0805 23:11:23.971503   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:23.971906   28839 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:11:23.971937   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:23.972129   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHHostname
	I0805 23:11:23.974453   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:23.974801   28839 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:11:23.974857   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:23.974940   28839 provision.go:143] copyHostCerts
	I0805 23:11:23.974985   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem
	I0805 23:11:23.975026   28839 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem, removing ...
	I0805 23:11:23.975063   28839 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem
	I0805 23:11:23.975147   28839 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem (1082 bytes)
	I0805 23:11:23.975240   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem
	I0805 23:11:23.975265   28839 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem, removing ...
	I0805 23:11:23.975275   28839 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem
	I0805 23:11:23.975313   28839 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem (1123 bytes)
	I0805 23:11:23.975468   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem
	I0805 23:11:23.975498   28839 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem, removing ...
	I0805 23:11:23.975506   28839 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem
	I0805 23:11:23.975588   28839 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem (1679 bytes)
	I0805 23:11:23.975686   28839 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem org=jenkins.ha-044175-m02 san=[127.0.0.1 192.168.39.112 ha-044175-m02 localhost minikube]
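The server certificate generated here is signed by the cached minikube CA and carries the SAN list printed in the log (loopback, the node IP 192.168.39.112, the hostname, localhost, minikube). A compact sketch of issuing such a SAN-bearing certificate with crypto/x509; the throwaway RSA CA, key sizes and validity window are illustrative choices, not the values minikube actually uses, and error handling is dropped for brevity:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA key pair standing in for ca.pem / ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate whose SANs mirror the ones listed in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-044175-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.112")},
		DNSNames:     []string{"ha-044175-m02", "localhost", "minikube"},
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Println(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
}
```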
	I0805 23:11:24.361457   28839 provision.go:177] copyRemoteCerts
	I0805 23:11:24.361520   28839 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 23:11:24.361549   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHHostname
	I0805 23:11:24.364017   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:24.364429   28839 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:11:24.364462   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:24.364598   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHPort
	I0805 23:11:24.364831   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHKeyPath
	I0805 23:11:24.364995   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHUsername
	I0805 23:11:24.365133   28839 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m02/id_rsa Username:docker}
	I0805 23:11:24.450484   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 23:11:24.450573   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 23:11:24.474841   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 23:11:24.474907   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 23:11:24.500328   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 23:11:24.500404   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0805 23:11:24.523694   28839 provision.go:87] duration metric: took 555.511879ms to configureAuth
	I0805 23:11:24.523728   28839 buildroot.go:189] setting minikube options for container-runtime
	I0805 23:11:24.523925   28839 config.go:182] Loaded profile config "ha-044175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:11:24.524010   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHHostname
	I0805 23:11:24.526859   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:24.527274   28839 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:11:24.527303   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:24.527546   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHPort
	I0805 23:11:24.527754   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHKeyPath
	I0805 23:11:24.527969   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHKeyPath
	I0805 23:11:24.528126   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHUsername
	I0805 23:11:24.528305   28839 main.go:141] libmachine: Using SSH client type: native
	I0805 23:11:24.528504   28839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.112 22 <nil> <nil>}
	I0805 23:11:24.528526   28839 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 23:11:24.798971   28839 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
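The literal %!s(MISSING) in the logged command is almost certainly a logging artifact rather than what ran on the guest: the command string legitimately contains a `printf %s`, and when that string is later pushed through a Printf-style logger with no arguments, Go renders the stray verb as %!s(MISSING). The tee output just above shows the file contents came through correctly. A tiny reproduction of the effect (the command text is copied from the log; the Printf call is the assumed logging path):

```go
package main

import "fmt"

func main() {
	// The guest-side command contains a printf %s verb of its own.
	cmd := `sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`

	// Feeding that string through Printf without arguments reproduces the
	// %!s(MISSING) rendering seen in the log above.
	fmt.Printf("About to run SSH command:\n" + cmd + "\n")
}
```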
	
	I0805 23:11:24.798996   28839 main.go:141] libmachine: Checking connection to Docker...
	I0805 23:11:24.799009   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetURL
	I0805 23:11:24.800534   28839 main.go:141] libmachine: (ha-044175-m02) DBG | Using libvirt version 6000000
	I0805 23:11:24.802798   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:24.803181   28839 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:11:24.803206   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:24.803378   28839 main.go:141] libmachine: Docker is up and running!
	I0805 23:11:24.803392   28839 main.go:141] libmachine: Reticulating splines...
	I0805 23:11:24.803399   28839 client.go:171] duration metric: took 26.451679001s to LocalClient.Create
	I0805 23:11:24.803423   28839 start.go:167] duration metric: took 26.451737647s to libmachine.API.Create "ha-044175"
	I0805 23:11:24.803435   28839 start.go:293] postStartSetup for "ha-044175-m02" (driver="kvm2")
	I0805 23:11:24.803449   28839 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 23:11:24.803471   28839 main.go:141] libmachine: (ha-044175-m02) Calling .DriverName
	I0805 23:11:24.803743   28839 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 23:11:24.803766   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHHostname
	I0805 23:11:24.806266   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:24.806678   28839 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:11:24.806706   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:24.806848   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHPort
	I0805 23:11:24.807027   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHKeyPath
	I0805 23:11:24.807171   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHUsername
	I0805 23:11:24.807342   28839 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m02/id_rsa Username:docker}
	I0805 23:11:24.890641   28839 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 23:11:24.894903   28839 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 23:11:24.894935   28839 filesync.go:126] Scanning /home/jenkins/minikube-integration/19373-9606/.minikube/addons for local assets ...
	I0805 23:11:24.895008   28839 filesync.go:126] Scanning /home/jenkins/minikube-integration/19373-9606/.minikube/files for local assets ...
	I0805 23:11:24.895124   28839 filesync.go:149] local asset: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem -> 167922.pem in /etc/ssl/certs
	I0805 23:11:24.895136   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem -> /etc/ssl/certs/167922.pem
	I0805 23:11:24.895242   28839 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 23:11:24.904881   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem --> /etc/ssl/certs/167922.pem (1708 bytes)
	I0805 23:11:24.929078   28839 start.go:296] duration metric: took 125.629214ms for postStartSetup
	I0805 23:11:24.929143   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetConfigRaw
	I0805 23:11:24.929870   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetIP
	I0805 23:11:24.933181   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:24.933617   28839 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:11:24.933650   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:24.933916   28839 profile.go:143] Saving config to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/config.json ...
	I0805 23:11:24.934200   28839 start.go:128] duration metric: took 26.601256424s to createHost
	I0805 23:11:24.934240   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHHostname
	I0805 23:11:24.936916   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:24.937307   28839 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:11:24.937334   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:24.937478   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHPort
	I0805 23:11:24.937663   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHKeyPath
	I0805 23:11:24.937851   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHKeyPath
	I0805 23:11:24.938028   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHUsername
	I0805 23:11:24.938195   28839 main.go:141] libmachine: Using SSH client type: native
	I0805 23:11:24.938370   28839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.112 22 <nil> <nil>}
	I0805 23:11:24.938382   28839 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 23:11:25.047998   28839 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722899485.026896819
	
	I0805 23:11:25.048024   28839 fix.go:216] guest clock: 1722899485.026896819
	I0805 23:11:25.048036   28839 fix.go:229] Guest: 2024-08-05 23:11:25.026896819 +0000 UTC Remote: 2024-08-05 23:11:24.934222067 +0000 UTC m=+84.250210200 (delta=92.674752ms)
	I0805 23:11:25.048082   28839 fix.go:200] guest clock delta is within tolerance: 92.674752ms
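The guest-clock check compares the timestamp reported by the VM (via `date +%s.%N`, mangled to %!s(MISSING).%!N(MISSING) in the logged command text) with the host's notion of "now" and accepts the machine when the difference stays inside a tolerance. A sketch of the arithmetic using the two timestamps from the log; the 2-second tolerance is an assumed illustrative value, not minikube's exact constant:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Values taken from the two timestamps in the fix.go lines above.
	guest := time.Unix(1722899485, 26896819)
	remote := time.Date(2024, 8, 5, 23, 11, 24, 934222067, time.UTC)

	delta := guest.Sub(remote)
	const tolerance = 2 * time.Second // illustrative threshold
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta < tolerance && delta > -tolerance)
}
```

The computed delta comes out at roughly 92.67ms, matching the "guest clock delta is within tolerance" line above.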
	I0805 23:11:25.048092   28839 start.go:83] releasing machines lock for "ha-044175-m02", held for 26.715325803s
	I0805 23:11:25.048117   28839 main.go:141] libmachine: (ha-044175-m02) Calling .DriverName
	I0805 23:11:25.048440   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetIP
	I0805 23:11:25.051622   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:25.052002   28839 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:11:25.052027   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:25.054434   28839 out.go:177] * Found network options:
	I0805 23:11:25.055807   28839 out.go:177]   - NO_PROXY=192.168.39.57
	W0805 23:11:25.057028   28839 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 23:11:25.057057   28839 main.go:141] libmachine: (ha-044175-m02) Calling .DriverName
	I0805 23:11:25.057686   28839 main.go:141] libmachine: (ha-044175-m02) Calling .DriverName
	I0805 23:11:25.057893   28839 main.go:141] libmachine: (ha-044175-m02) Calling .DriverName
	I0805 23:11:25.057955   28839 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 23:11:25.058002   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHHostname
	W0805 23:11:25.058072   28839 proxy.go:119] fail to check proxy env: Error ip not in block
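The two "fail to check proxy env: Error ip not in block" warnings appear to come from trying to interpret the NO_PROXY entry 192.168.39.57 as a CIDR block while checking whether the new node's IP is already excluded from proxying; a bare IP is not a block, so the parse fails and the check is skipped. A speculative sketch of that containment test (inBlock is a made-up helper; the real proxy.go logic is not shown in this log):

```go
package main

import (
	"fmt"
	"net"
)

// inBlock reports whether ip falls inside the CIDR block. A bare IP such as
// "192.168.39.57" is not a block, which is the case the warning reflects.
func inBlock(ip, block string) (bool, error) {
	parsed := net.ParseIP(ip)
	if parsed == nil {
		return false, fmt.Errorf("invalid ip %q", ip)
	}
	_, cidr, err := net.ParseCIDR(block)
	if err != nil {
		return false, fmt.Errorf("ip not in block: %v", err)
	}
	return cidr.Contains(parsed), nil
}

func main() {
	fmt.Println(inBlock("192.168.39.112", "192.168.39.57"))   // error: entry is not a CIDR block
	fmt.Println(inBlock("192.168.39.112", "192.168.39.0/24")) // true
}
```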
	I0805 23:11:25.058136   28839 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 23:11:25.058149   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHHostname
	I0805 23:11:25.060690   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:25.060938   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:25.061131   28839 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:11:25.061156   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:25.061313   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHPort
	I0805 23:11:25.061437   28839 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:11:25.061460   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:25.061476   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHKeyPath
	I0805 23:11:25.061626   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHPort
	I0805 23:11:25.061632   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHUsername
	I0805 23:11:25.061803   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHKeyPath
	I0805 23:11:25.061794   28839 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m02/id_rsa Username:docker}
	I0805 23:11:25.061951   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHUsername
	I0805 23:11:25.062141   28839 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m02/id_rsa Username:docker}
	I0805 23:11:25.299725   28839 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 23:11:25.306174   28839 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 23:11:25.306242   28839 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 23:11:25.322685   28839 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 23:11:25.322706   28839 start.go:495] detecting cgroup driver to use...
	I0805 23:11:25.322785   28839 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 23:11:25.339357   28839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 23:11:25.354693   28839 docker.go:217] disabling cri-docker service (if available) ...
	I0805 23:11:25.354757   28839 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 23:11:25.369378   28839 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 23:11:25.384906   28839 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 23:11:25.515594   28839 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 23:11:25.686718   28839 docker.go:233] disabling docker service ...
	I0805 23:11:25.686825   28839 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 23:11:25.702675   28839 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 23:11:25.716283   28839 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 23:11:25.850322   28839 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 23:11:25.974241   28839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 23:11:25.989237   28839 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 23:11:26.008400   28839 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0805 23:11:26.008467   28839 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:11:26.019119   28839 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 23:11:26.019203   28839 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:11:26.030550   28839 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:11:26.042068   28839 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:11:26.052855   28839 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 23:11:26.063581   28839 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:11:26.073993   28839 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:11:26.090786   28839 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
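The run of sed invocations above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.9, switch cgroup_manager to cgroupfs, normalize conmon_cgroup, and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. The first two substitutions expressed in Go, operating on an in-memory copy purely to show what the sed expressions do (the sample config text is invented):

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.8"
[crio.runtime]
cgroup_manager = "systemd"
`
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}
```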
	I0805 23:11:26.102178   28839 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 23:11:26.113041   28839 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 23:11:26.113101   28839 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 23:11:26.128071   28839 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 23:11:26.139676   28839 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 23:11:26.267911   28839 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 23:11:26.406809   28839 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 23:11:26.406876   28839 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 23:11:26.411835   28839 start.go:563] Will wait 60s for crictl version
	I0805 23:11:26.411902   28839 ssh_runner.go:195] Run: which crictl
	I0805 23:11:26.416003   28839 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 23:11:26.455739   28839 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 23:11:26.455808   28839 ssh_runner.go:195] Run: crio --version
	I0805 23:11:26.486871   28839 ssh_runner.go:195] Run: crio --version
	I0805 23:11:26.518231   28839 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0805 23:11:26.519697   28839 out.go:177]   - env NO_PROXY=192.168.39.57
	I0805 23:11:26.521151   28839 main.go:141] libmachine: (ha-044175-m02) Calling .GetIP
	I0805 23:11:26.524244   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:26.524712   28839 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:11:13 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:11:26.524738   28839 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:11:26.524958   28839 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0805 23:11:26.529501   28839 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 23:11:26.542765   28839 mustload.go:65] Loading cluster: ha-044175
	I0805 23:11:26.542991   28839 config.go:182] Loaded profile config "ha-044175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:11:26.543340   28839 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:11:26.543377   28839 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:11:26.557439   28839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43339
	I0805 23:11:26.557872   28839 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:11:26.558273   28839 main.go:141] libmachine: Using API Version  1
	I0805 23:11:26.558294   28839 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:11:26.558592   28839 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:11:26.558775   28839 main.go:141] libmachine: (ha-044175) Calling .GetState
	I0805 23:11:26.560260   28839 host.go:66] Checking if "ha-044175" exists ...
	I0805 23:11:26.560572   28839 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:11:26.560616   28839 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:11:26.575601   28839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37281
	I0805 23:11:26.575998   28839 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:11:26.576408   28839 main.go:141] libmachine: Using API Version  1
	I0805 23:11:26.576432   28839 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:11:26.576748   28839 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:11:26.576917   28839 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:11:26.577091   28839 certs.go:68] Setting up /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175 for IP: 192.168.39.112
	I0805 23:11:26.577107   28839 certs.go:194] generating shared ca certs ...
	I0805 23:11:26.577126   28839 certs.go:226] acquiring lock for ca certs: {Name:mkf35a042c1656d191f542eee7fa087aad4d29d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 23:11:26.577263   28839 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key
	I0805 23:11:26.577301   28839 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key
	I0805 23:11:26.577310   28839 certs.go:256] generating profile certs ...
	I0805 23:11:26.577379   28839 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/client.key
	I0805 23:11:26.577402   28839 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key.ad18f62e
	I0805 23:11:26.577418   28839 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt.ad18f62e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.57 192.168.39.112 192.168.39.254]
	I0805 23:11:26.637767   28839 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt.ad18f62e ...
	I0805 23:11:26.637796   28839 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt.ad18f62e: {Name:mkad1ee795bff5c5d74c9f4f3dd96dcf784d053b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 23:11:26.637952   28839 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key.ad18f62e ...
	I0805 23:11:26.637964   28839 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key.ad18f62e: {Name:mk035a446b2e7691a651da6b4b78721fdb2a6d0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 23:11:26.638029   28839 certs.go:381] copying /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt.ad18f62e -> /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt
	I0805 23:11:26.638159   28839 certs.go:385] copying /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key.ad18f62e -> /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key
	I0805 23:11:26.638287   28839 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.key
	I0805 23:11:26.638301   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0805 23:11:26.638314   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0805 23:11:26.638327   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0805 23:11:26.638339   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0805 23:11:26.638352   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0805 23:11:26.638365   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0805 23:11:26.638376   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0805 23:11:26.638388   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0805 23:11:26.638436   28839 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792.pem (1338 bytes)
	W0805 23:11:26.638475   28839 certs.go:480] ignoring /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792_empty.pem, impossibly tiny 0 bytes
	I0805 23:11:26.638483   28839 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 23:11:26.638513   28839 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem (1082 bytes)
	I0805 23:11:26.638543   28839 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem (1123 bytes)
	I0805 23:11:26.638580   28839 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem (1679 bytes)
	I0805 23:11:26.638635   28839 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem (1708 bytes)
	I0805 23:11:26.638673   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem -> /usr/share/ca-certificates/167922.pem
	I0805 23:11:26.638696   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0805 23:11:26.638715   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792.pem -> /usr/share/ca-certificates/16792.pem
	I0805 23:11:26.638759   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:11:26.641680   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:11:26.642048   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:11:26.642082   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:11:26.642240   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:11:26.642460   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:11:26.642609   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:11:26.642730   28839 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/id_rsa Username:docker}
	I0805 23:11:26.711456   28839 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0805 23:11:26.718679   28839 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0805 23:11:26.739835   28839 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0805 23:11:26.744682   28839 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0805 23:11:26.757766   28839 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0805 23:11:26.762276   28839 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0805 23:11:26.773825   28839 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0805 23:11:26.778500   28839 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0805 23:11:26.789799   28839 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0805 23:11:26.794170   28839 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0805 23:11:26.806013   28839 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0805 23:11:26.810492   28839 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0805 23:11:26.821952   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 23:11:26.848184   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0805 23:11:26.872528   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 23:11:26.897402   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 23:11:26.925593   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0805 23:11:26.953348   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0805 23:11:26.977810   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 23:11:27.002673   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0805 23:11:27.027853   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem --> /usr/share/ca-certificates/167922.pem (1708 bytes)
	I0805 23:11:27.052873   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 23:11:27.077190   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792.pem --> /usr/share/ca-certificates/16792.pem (1338 bytes)
	I0805 23:11:27.103532   28839 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0805 23:11:27.120876   28839 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0805 23:11:27.138371   28839 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0805 23:11:27.155459   28839 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0805 23:11:27.172538   28839 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0805 23:11:27.189842   28839 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0805 23:11:27.207488   28839 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0805 23:11:27.224852   28839 ssh_runner.go:195] Run: openssl version
	I0805 23:11:27.230683   28839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167922.pem && ln -fs /usr/share/ca-certificates/167922.pem /etc/ssl/certs/167922.pem"
	I0805 23:11:27.241747   28839 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167922.pem
	I0805 23:11:27.246552   28839 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 23:03 /usr/share/ca-certificates/167922.pem
	I0805 23:11:27.246620   28839 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167922.pem
	I0805 23:11:27.252853   28839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167922.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 23:11:27.264243   28839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 23:11:27.275787   28839 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 23:11:27.280638   28839 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0805 23:11:27.280702   28839 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 23:11:27.286790   28839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 23:11:27.298786   28839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16792.pem && ln -fs /usr/share/ca-certificates/16792.pem /etc/ssl/certs/16792.pem"
	I0805 23:11:27.310297   28839 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16792.pem
	I0805 23:11:27.315450   28839 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 23:03 /usr/share/ca-certificates/16792.pem
	I0805 23:11:27.315514   28839 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16792.pem
	I0805 23:11:27.321473   28839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16792.pem /etc/ssl/certs/51391683.0"
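Each CA bundle entry is made discoverable to OpenSSL-based clients by linking /etc/ssl/certs/<subject-hash>.0 at the PEM file, where the hash comes from `openssl x509 -hash -noout` (b5213941 for minikubeCA above). A sketch of that pattern; linkByHash is a made-up helper and the target directory is parameterized so the sketch can be tried against a scratch directory:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash asks openssl for the certificate's subject hash, then symlinks
// <certDir>/<hash>.0 at the PEM file, emulating the ln -fs in the log.
func linkByHash(certPath, certDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certDir, hash+".0")
	_ = os.Remove(link) // emulate ln -fs: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	// Will simply print an error if the PEM file is absent on this machine.
	fmt.Println(linkByHash("/usr/share/ca-certificates/minikubeCA.pem", os.TempDir()))
}
```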
	I0805 23:11:27.332363   28839 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 23:11:27.336935   28839 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0805 23:11:27.336993   28839 kubeadm.go:934] updating node {m02 192.168.39.112 8443 v1.30.3 crio true true} ...
	I0805 23:11:27.337095   28839 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-044175-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.112
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-044175 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
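The kubelet unit drop-in is rendered per node: the binary path embeds the Kubernetes version, and --hostname-override / --node-ip carry this machine's name and address. A trivial sketch of assembling that ExecStart line from those three inputs (the variables are placeholders for values minikube takes from its node and cluster config):

```go
package main

import "fmt"

func main() {
	version, node, ip := "v1.30.3", "ha-044175-m02", "192.168.39.112"
	execStart := fmt.Sprintf(
		"ExecStart=/var/lib/minikube/binaries/%s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s",
		version, node, ip)
	fmt.Println(execStart)
}
```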
	I0805 23:11:27.337124   28839 kube-vip.go:115] generating kube-vip config ...
	I0805 23:11:27.337161   28839 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0805 23:11:27.354765   28839 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0805 23:11:27.354834   28839 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
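The kube-vip static-pod manifest above is generated from a template: the VIP address (192.168.39.254), the API server port (8443) and the leader-election / load-balancing toggles are filled in per cluster, and the result is later written to /etc/kubernetes/manifests/kube-vip.yaml. A stripped-down stand-in using text/template that renders just two of those fields, to show the mechanism rather than minikube's actual template:

```go
package main

import (
	"os"
	"text/template"
)

// Only the port and VIP address are templated here; the real manifest
// carries the full pod spec shown in the log above.
var manifest = template.Must(template.New("kube-vip").Parse(`    - name: port
      value: "{{ .Port }}"
    - name: address
      value: {{ .VIP }}
`))

func main() {
	_ = manifest.Execute(os.Stdout, struct {
		Port int
		VIP  string
	}{Port: 8443, VIP: "192.168.39.254"})
}
```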
	I0805 23:11:27.354891   28839 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 23:11:27.365652   28839 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0805 23:11:27.365704   28839 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0805 23:11:27.375939   28839 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19373-9606/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0805 23:11:27.375939   28839 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19373-9606/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0805 23:11:27.375940   28839 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0805 23:11:27.376110   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0805 23:11:27.376208   28839 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0805 23:11:27.382613   28839 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0805 23:11:27.382649   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0805 23:11:28.470234   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0805 23:11:28.470313   28839 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0805 23:11:28.475604   28839 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0805 23:11:28.475649   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0805 23:11:28.917910   28839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 23:11:28.932927   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0805 23:11:28.933011   28839 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0805 23:11:28.937444   28839 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0805 23:11:28.937481   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
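	Each binary is fetched from dl.k8s.io together with its .sha256 checksum file, cached under .minikube/cache, and then copied into /var/lib/minikube/binaries/v1.30.3 on the VM. Roughly the same download-and-verify step by hand, using the kubelet URLs shown in the log (the other binaries follow the same pattern; this is a sketch, not part of the test):
	    curl -fLO https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet
	    curl -fLO https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check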
	I0805 23:11:29.356509   28839 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0805 23:11:29.366545   28839 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0805 23:11:29.383465   28839 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 23:11:29.400422   28839 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0805 23:11:29.417598   28839 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0805 23:11:29.422348   28839 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 23:11:29.435838   28839 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 23:11:29.557695   28839 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 23:11:29.576202   28839 host.go:66] Checking if "ha-044175" exists ...
	I0805 23:11:29.576670   28839 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:11:29.576714   28839 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:11:29.591867   28839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45075
	I0805 23:11:29.592430   28839 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:11:29.592950   28839 main.go:141] libmachine: Using API Version  1
	I0805 23:11:29.592969   28839 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:11:29.593276   28839 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:11:29.593479   28839 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:11:29.593607   28839 start.go:317] joinCluster: &{Name:ha-044175 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-044175 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.112 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 23:11:29.593717   28839 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0805 23:11:29.593739   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:11:29.597339   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:11:29.597799   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:11:29.597825   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:11:29.598007   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:11:29.598215   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:11:29.598389   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:11:29.598524   28839 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/id_rsa Username:docker}
	I0805 23:11:29.760365   28839 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.112 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 23:11:29.760406   28839 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9e2ce2.maogyyg7kfbeyj3n --discovery-token-ca-cert-hash sha256:80c3f4a7caafd825f47d5f536053424d1d775e8da247cc5594b6b717e711fcd3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-044175-m02 --control-plane --apiserver-advertise-address=192.168.39.112 --apiserver-bind-port=8443"
	I0805 23:11:51.894631   28839 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9e2ce2.maogyyg7kfbeyj3n --discovery-token-ca-cert-hash sha256:80c3f4a7caafd825f47d5f536053424d1d775e8da247cc5594b6b717e711fcd3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-044175-m02 --control-plane --apiserver-advertise-address=192.168.39.112 --apiserver-bind-port=8443": (22.134160621s)
	I0805 23:11:51.894696   28839 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0805 23:11:52.474720   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-044175-m02 minikube.k8s.io/updated_at=2024_08_05T23_11_52_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4 minikube.k8s.io/name=ha-044175 minikube.k8s.io/primary=false
	I0805 23:11:52.614587   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-044175-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0805 23:11:52.799520   28839 start.go:319] duration metric: took 23.205908074s to joinCluster
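	At this point the second control plane has been joined via kubeadm, labeled, and had its control-plane NoSchedule taint removed. A quick way to confirm the resulting topology from the host, using the kubectl context minikube creates for the profile (a sketch only, not part of the test itself):
	    kubectl --context ha-044175 get nodes -o wide
	    kubectl --context ha-044175 -n kube-system get pods -l component=etcd -o wide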
	I0805 23:11:52.799617   28839 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.112 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 23:11:52.799937   28839 config.go:182] Loaded profile config "ha-044175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:11:52.801375   28839 out.go:177] * Verifying Kubernetes components...
	I0805 23:11:52.802951   28839 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 23:11:53.098436   28839 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 23:11:53.120645   28839 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19373-9606/kubeconfig
	I0805 23:11:53.120920   28839 kapi.go:59] client config for ha-044175: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/client.crt", KeyFile:"/home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/client.key", CAFile:"/home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02940), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0805 23:11:53.120985   28839 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.57:8443
	I0805 23:11:53.121174   28839 node_ready.go:35] waiting up to 6m0s for node "ha-044175-m02" to be "Ready" ...
	I0805 23:11:53.121256   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:11:53.121263   28839 round_trippers.go:469] Request Headers:
	I0805 23:11:53.121272   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:11:53.121275   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:11:53.148497   28839 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I0805 23:11:53.621949   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:11:53.621974   28839 round_trippers.go:469] Request Headers:
	I0805 23:11:53.621986   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:11:53.621992   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:11:53.627851   28839 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0805 23:11:54.121485   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:11:54.121505   28839 round_trippers.go:469] Request Headers:
	I0805 23:11:54.121513   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:11:54.121517   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:11:54.125287   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:11:54.621715   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:11:54.621738   28839 round_trippers.go:469] Request Headers:
	I0805 23:11:54.621746   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:11:54.621751   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:11:54.626273   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:11:55.121530   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:11:55.121553   28839 round_trippers.go:469] Request Headers:
	I0805 23:11:55.121562   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:11:55.121568   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:11:55.124549   28839 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 23:11:55.125343   28839 node_ready.go:53] node "ha-044175-m02" has status "Ready":"False"
	I0805 23:11:55.621745   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:11:55.621769   28839 round_trippers.go:469] Request Headers:
	I0805 23:11:55.621776   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:11:55.621779   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:11:55.624905   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:11:56.121489   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:11:56.121509   28839 round_trippers.go:469] Request Headers:
	I0805 23:11:56.121515   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:11:56.121519   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:11:56.124629   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:11:56.622128   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:11:56.622149   28839 round_trippers.go:469] Request Headers:
	I0805 23:11:56.622157   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:11:56.622161   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:11:56.625675   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:11:57.122278   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:11:57.122306   28839 round_trippers.go:469] Request Headers:
	I0805 23:11:57.122316   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:11:57.122329   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:11:57.125550   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:11:57.126534   28839 node_ready.go:53] node "ha-044175-m02" has status "Ready":"False"
	I0805 23:11:57.622356   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:11:57.622381   28839 round_trippers.go:469] Request Headers:
	I0805 23:11:57.622389   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:11:57.622392   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:11:57.625634   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:11:58.121512   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:11:58.121533   28839 round_trippers.go:469] Request Headers:
	I0805 23:11:58.121540   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:11:58.121546   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:11:58.125008   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:11:58.622188   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:11:58.622213   28839 round_trippers.go:469] Request Headers:
	I0805 23:11:58.622221   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:11:58.622226   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:11:58.627086   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:11:59.121801   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:11:59.121829   28839 round_trippers.go:469] Request Headers:
	I0805 23:11:59.121837   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:11:59.121843   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:11:59.125563   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:11:59.621918   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:11:59.621939   28839 round_trippers.go:469] Request Headers:
	I0805 23:11:59.621946   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:11:59.621951   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:11:59.625132   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:11:59.626012   28839 node_ready.go:53] node "ha-044175-m02" has status "Ready":"False"
	I0805 23:12:00.122129   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:00.122149   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:00.122156   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:00.122160   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:00.126570   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:12:00.621605   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:00.621631   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:00.621644   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:00.621651   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:00.625040   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:01.121793   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:01.121814   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:01.121822   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:01.121827   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:01.125732   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:01.621884   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:01.621911   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:01.621922   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:01.621929   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:01.625225   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:02.122259   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:02.122280   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:02.122287   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:02.122291   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:02.125617   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:02.126221   28839 node_ready.go:53] node "ha-044175-m02" has status "Ready":"False"
	I0805 23:12:02.621442   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:02.621465   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:02.621476   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:02.621481   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:02.624892   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:03.122381   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:03.122402   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:03.122408   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:03.122412   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:03.128861   28839 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0805 23:12:03.622378   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:03.622400   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:03.622411   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:03.622415   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:03.625999   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:04.122150   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:04.122176   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:04.122185   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:04.122191   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:04.125491   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:04.126353   28839 node_ready.go:53] node "ha-044175-m02" has status "Ready":"False"
	I0805 23:12:04.622334   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:04.622358   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:04.622364   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:04.622369   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:04.626412   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:12:05.121417   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:05.121436   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:05.121443   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:05.121446   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:05.124733   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:05.621624   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:05.621646   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:05.621653   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:05.621658   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:05.625121   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:06.122214   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:06.122240   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:06.122252   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:06.122259   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:06.125766   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:06.621591   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:06.621613   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:06.621620   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:06.621626   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:06.625163   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:06.625992   28839 node_ready.go:53] node "ha-044175-m02" has status "Ready":"False"
	I0805 23:12:07.121365   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:07.121389   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:07.121399   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:07.121404   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:07.125591   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:12:07.621815   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:07.621848   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:07.621858   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:07.621862   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:07.625062   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:08.121368   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:08.121402   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:08.121409   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:08.121412   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:08.124749   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:08.622066   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:08.622091   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:08.622100   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:08.622105   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:08.625485   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:08.626361   28839 node_ready.go:53] node "ha-044175-m02" has status "Ready":"False"
	I0805 23:12:09.121550   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:09.121572   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:09.121580   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:09.121583   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:09.124987   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:09.621668   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:09.621691   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:09.621710   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:09.621715   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:09.625294   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:10.121590   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:10.121624   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:10.121633   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:10.121636   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:10.126103   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:12:10.621530   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:10.621551   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:10.621560   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:10.621565   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:10.624726   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:11.121423   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:11.121444   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:11.121452   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:11.121455   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:11.124911   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:11.125341   28839 node_ready.go:53] node "ha-044175-m02" has status "Ready":"False"
	I0805 23:12:11.621691   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:11.621716   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:11.621726   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:11.621731   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:11.625061   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:12.121996   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:12.122028   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:12.122036   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:12.122042   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:12.125612   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:12.126424   28839 node_ready.go:49] node "ha-044175-m02" has status "Ready":"True"
	I0805 23:12:12.126449   28839 node_ready.go:38] duration metric: took 19.00525469s for node "ha-044175-m02" to be "Ready" ...
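	The loop above polls GET /api/v1/nodes/ha-044175-m02 roughly every 500ms until the node reports Ready. The equivalent wait expressed with kubectl, shown only as a sketch of what the poller is doing:
	    kubectl --context ha-044175 wait --for=condition=Ready node/ha-044175-m02 --timeout=6m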
	I0805 23:12:12.126465   28839 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 23:12:12.126527   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods
	I0805 23:12:12.126536   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:12.126543   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:12.126551   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:12.135406   28839 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0805 23:12:12.143222   28839 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-g9bml" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:12.143311   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-g9bml
	I0805 23:12:12.143319   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:12.143326   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:12.143334   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:12.146612   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:12.147424   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175
	I0805 23:12:12.147441   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:12.147449   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:12.147454   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:12.150255   28839 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 23:12:12.150818   28839 pod_ready.go:92] pod "coredns-7db6d8ff4d-g9bml" in "kube-system" namespace has status "Ready":"True"
	I0805 23:12:12.150839   28839 pod_ready.go:81] duration metric: took 7.590146ms for pod "coredns-7db6d8ff4d-g9bml" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:12.150848   28839 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vzhst" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:12.150942   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vzhst
	I0805 23:12:12.150952   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:12.150959   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:12.150963   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:12.153621   28839 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 23:12:12.154355   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175
	I0805 23:12:12.154370   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:12.154378   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:12.154382   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:12.156751   28839 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 23:12:12.157366   28839 pod_ready.go:92] pod "coredns-7db6d8ff4d-vzhst" in "kube-system" namespace has status "Ready":"True"
	I0805 23:12:12.157390   28839 pod_ready.go:81] duration metric: took 6.536219ms for pod "coredns-7db6d8ff4d-vzhst" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:12.157401   28839 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-044175" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:12.157450   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/etcd-ha-044175
	I0805 23:12:12.157457   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:12.157465   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:12.157468   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:12.159895   28839 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 23:12:12.160437   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175
	I0805 23:12:12.160451   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:12.160457   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:12.160461   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:12.162694   28839 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 23:12:12.163116   28839 pod_ready.go:92] pod "etcd-ha-044175" in "kube-system" namespace has status "Ready":"True"
	I0805 23:12:12.163134   28839 pod_ready.go:81] duration metric: took 5.728191ms for pod "etcd-ha-044175" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:12.163143   28839 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-044175-m02" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:12.163194   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/etcd-ha-044175-m02
	I0805 23:12:12.163203   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:12.163210   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:12.163213   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:12.166402   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:12.167376   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:12.167393   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:12.167401   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:12.167404   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:12.169619   28839 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 23:12:12.170517   28839 pod_ready.go:92] pod "etcd-ha-044175-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 23:12:12.170534   28839 pod_ready.go:81] duration metric: took 7.385716ms for pod "etcd-ha-044175-m02" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:12.170547   28839 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-044175" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:12.322937   28839 request.go:629] Waited for 152.336703ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-044175
	I0805 23:12:12.323005   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-044175
	I0805 23:12:12.323012   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:12.323021   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:12.323027   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:12.326531   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:12.522848   28839 request.go:629] Waited for 195.379036ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes/ha-044175
	I0805 23:12:12.522933   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175
	I0805 23:12:12.522940   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:12.522947   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:12.522951   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:12.526139   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:12.526660   28839 pod_ready.go:92] pod "kube-apiserver-ha-044175" in "kube-system" namespace has status "Ready":"True"
	I0805 23:12:12.526677   28839 pod_ready.go:81] duration metric: took 356.124671ms for pod "kube-apiserver-ha-044175" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:12.526687   28839 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-044175-m02" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:12.722910   28839 request.go:629] Waited for 196.160326ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-044175-m02
	I0805 23:12:12.723002   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-044175-m02
	I0805 23:12:12.723010   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:12.723018   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:12.723028   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:12.726207   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:12.922332   28839 request.go:629] Waited for 195.350633ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:12.922388   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:12.922393   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:12.922400   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:12.922404   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:12.925742   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:12.926446   28839 pod_ready.go:92] pod "kube-apiserver-ha-044175-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 23:12:12.926465   28839 pod_ready.go:81] duration metric: took 399.771524ms for pod "kube-apiserver-ha-044175-m02" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:12.926475   28839 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-044175" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:13.122661   28839 request.go:629] Waited for 196.12267ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-044175
	I0805 23:12:13.122738   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-044175
	I0805 23:12:13.122743   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:13.122751   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:13.122756   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:13.125878   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:13.322755   28839 request.go:629] Waited for 196.363874ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes/ha-044175
	I0805 23:12:13.322812   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175
	I0805 23:12:13.322817   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:13.322825   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:13.322836   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:13.326871   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:12:13.327724   28839 pod_ready.go:92] pod "kube-controller-manager-ha-044175" in "kube-system" namespace has status "Ready":"True"
	I0805 23:12:13.327746   28839 pod_ready.go:81] duration metric: took 401.265029ms for pod "kube-controller-manager-ha-044175" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:13.327757   28839 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-044175-m02" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:13.522071   28839 request.go:629] Waited for 194.256831ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-044175-m02
	I0805 23:12:13.522134   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-044175-m02
	I0805 23:12:13.522139   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:13.522158   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:13.522162   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:13.526485   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:12:13.722163   28839 request.go:629] Waited for 194.278089ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:13.722215   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:13.722220   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:13.722228   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:13.722231   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:13.725186   28839 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 23:12:13.725923   28839 pod_ready.go:92] pod "kube-controller-manager-ha-044175-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 23:12:13.725941   28839 pod_ready.go:81] duration metric: took 398.177359ms for pod "kube-controller-manager-ha-044175-m02" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:13.725952   28839 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jfs9q" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:13.923099   28839 request.go:629] Waited for 197.047899ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jfs9q
	I0805 23:12:13.923162   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jfs9q
	I0805 23:12:13.923167   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:13.923175   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:13.923180   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:13.926625   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:14.122760   28839 request.go:629] Waited for 195.388811ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:14.122819   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:14.122825   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:14.122833   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:14.122837   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:14.126522   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:14.127848   28839 pod_ready.go:92] pod "kube-proxy-jfs9q" in "kube-system" namespace has status "Ready":"True"
	I0805 23:12:14.127874   28839 pod_ready.go:81] duration metric: took 401.91347ms for pod "kube-proxy-jfs9q" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:14.127887   28839 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vj5sd" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:14.322970   28839 request.go:629] Waited for 194.988509ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vj5sd
	I0805 23:12:14.323029   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vj5sd
	I0805 23:12:14.323035   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:14.323042   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:14.323046   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:14.326746   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:14.522119   28839 request.go:629] Waited for 194.304338ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes/ha-044175
	I0805 23:12:14.522179   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175
	I0805 23:12:14.522192   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:14.522214   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:14.522220   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:14.528370   28839 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0805 23:12:14.529018   28839 pod_ready.go:92] pod "kube-proxy-vj5sd" in "kube-system" namespace has status "Ready":"True"
	I0805 23:12:14.529040   28839 pod_ready.go:81] duration metric: took 401.145004ms for pod "kube-proxy-vj5sd" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:14.529049   28839 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-044175" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:14.722709   28839 request.go:629] Waited for 193.590518ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-044175
	I0805 23:12:14.722779   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-044175
	I0805 23:12:14.722788   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:14.722798   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:14.722804   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:14.727234   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:12:14.922371   28839 request.go:629] Waited for 194.386722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes/ha-044175
	I0805 23:12:14.922432   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175
	I0805 23:12:14.922439   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:14.922448   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:14.922453   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:14.926046   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:14.926586   28839 pod_ready.go:92] pod "kube-scheduler-ha-044175" in "kube-system" namespace has status "Ready":"True"
	I0805 23:12:14.926604   28839 pod_ready.go:81] duration metric: took 397.548315ms for pod "kube-scheduler-ha-044175" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:14.926613   28839 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-044175-m02" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:15.122823   28839 request.go:629] Waited for 196.138859ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-044175-m02
	I0805 23:12:15.122879   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-044175-m02
	I0805 23:12:15.122885   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:15.122895   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:15.122903   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:15.126637   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:15.322528   28839 request.go:629] Waited for 194.388628ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:15.322589   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:12:15.322594   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:15.322601   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:15.322605   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:15.325830   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:15.326484   28839 pod_ready.go:92] pod "kube-scheduler-ha-044175-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 23:12:15.326500   28839 pod_ready.go:81] duration metric: took 399.881115ms for pod "kube-scheduler-ha-044175-m02" in "kube-system" namespace to be "Ready" ...
	I0805 23:12:15.326513   28839 pod_ready.go:38] duration metric: took 3.200030463s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 23:12:15.326536   28839 api_server.go:52] waiting for apiserver process to appear ...
	I0805 23:12:15.326592   28839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 23:12:15.343544   28839 api_server.go:72] duration metric: took 22.543885874s to wait for apiserver process to appear ...
	I0805 23:12:15.343576   28839 api_server.go:88] waiting for apiserver healthz status ...
	I0805 23:12:15.343604   28839 api_server.go:253] Checking apiserver healthz at https://192.168.39.57:8443/healthz ...
	I0805 23:12:15.348183   28839 api_server.go:279] https://192.168.39.57:8443/healthz returned 200:
	ok
	I0805 23:12:15.348281   28839 round_trippers.go:463] GET https://192.168.39.57:8443/version
	I0805 23:12:15.348293   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:15.348301   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:15.348305   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:15.349226   28839 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 23:12:15.349349   28839 api_server.go:141] control plane version: v1.30.3
	I0805 23:12:15.349368   28839 api_server.go:131] duration metric: took 5.784906ms to wait for apiserver health ...
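	The health check is a plain GET against /healthz on the first control plane, followed by /version to read the control-plane version. Assuming the default kubeadm RBAC that exposes these non-resource URLs anonymously, roughly the same probe from the host would be (-k skips TLS verification for brevity):
	    curl -k https://192.168.39.57:8443/healthz
	    curl -k https://192.168.39.57:8443/version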
	I0805 23:12:15.349383   28839 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 23:12:15.522875   28839 request.go:629] Waited for 173.4123ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods
	I0805 23:12:15.522929   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods
	I0805 23:12:15.522934   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:15.522942   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:15.522946   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:15.528927   28839 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0805 23:12:15.534765   28839 system_pods.go:59] 17 kube-system pods found
	I0805 23:12:15.534810   28839 system_pods.go:61] "coredns-7db6d8ff4d-g9bml" [fd474413-e416-48db-a7bf-f3c40675819b] Running
	I0805 23:12:15.534817   28839 system_pods.go:61] "coredns-7db6d8ff4d-vzhst" [f9c09745-be29-4403-9e7d-f9e4eaae5cac] Running
	I0805 23:12:15.534821   28839 system_pods.go:61] "etcd-ha-044175" [f9008d52-5a0c-4a6b-9cdf-7df18dd78752] Running
	I0805 23:12:15.534824   28839 system_pods.go:61] "etcd-ha-044175-m02" [773f42be-f8b5-47f0-bcd0-36bd6ae24bab] Running
	I0805 23:12:15.534828   28839 system_pods.go:61] "kindnet-hqhgc" [de6b28dc-79ea-43af-868e-e32180dcd5f2] Running
	I0805 23:12:15.534833   28839 system_pods.go:61] "kindnet-xqx4z" [8455705e-b140-4f1e-abff-6a71bbb5415f] Running
	I0805 23:12:15.534838   28839 system_pods.go:61] "kube-apiserver-ha-044175" [4e39654d-531d-4cf4-b4a9-beeada8e8d05] Running
	I0805 23:12:15.534842   28839 system_pods.go:61] "kube-apiserver-ha-044175-m02" [06dfad00-f627-43cd-abea-c3a34d423964] Running
	I0805 23:12:15.534847   28839 system_pods.go:61] "kube-controller-manager-ha-044175" [d6f6d163-103f-4af4-976f-c255d1933bb2] Running
	I0805 23:12:15.534855   28839 system_pods.go:61] "kube-controller-manager-ha-044175-m02" [1bf050d3-1969-4ca1-89d3-f729989fd6b8] Running
	I0805 23:12:15.534864   28839 system_pods.go:61] "kube-proxy-jfs9q" [d8d0b4df-e1e1-4354-ba55-594dec7d1e89] Running
	I0805 23:12:15.534868   28839 system_pods.go:61] "kube-proxy-vj5sd" [d6c9cdcb-e1b7-44c8-a6e3-5e5aeb76ba03] Running
	I0805 23:12:15.534872   28839 system_pods.go:61] "kube-scheduler-ha-044175" [41c96a32-1b26-4e05-a21a-48c4fd913b9f] Running
	I0805 23:12:15.534878   28839 system_pods.go:61] "kube-scheduler-ha-044175-m02" [8e41f86c-0b86-40be-a524-fbae6283693d] Running
	I0805 23:12:15.534881   28839 system_pods.go:61] "kube-vip-ha-044175" [505ff885-b8a0-48bd-8d1e-81e4583b48af] Running
	I0805 23:12:15.534884   28839 system_pods.go:61] "kube-vip-ha-044175-m02" [ffbecaef-6482-4c4e-8268-4b66e4799be5] Running
	I0805 23:12:15.534888   28839 system_pods.go:61] "storage-provisioner" [d30d1a5b-cfbe-4de6-a964-75c32e5dbf62] Running
	I0805 23:12:15.534893   28839 system_pods.go:74] duration metric: took 185.501567ms to wait for pod list to return data ...
	I0805 23:12:15.534904   28839 default_sa.go:34] waiting for default service account to be created ...
	I0805 23:12:15.722680   28839 request.go:629] Waited for 187.701592ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/default/serviceaccounts
	I0805 23:12:15.722770   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/default/serviceaccounts
	I0805 23:12:15.722782   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:15.722792   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:15.722800   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:15.726559   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:12:15.726832   28839 default_sa.go:45] found service account: "default"
	I0805 23:12:15.726852   28839 default_sa.go:55] duration metric: took 191.941352ms for default service account to be created ...
	I0805 23:12:15.726863   28839 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 23:12:15.922594   28839 request.go:629] Waited for 195.648365ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods
	I0805 23:12:15.922662   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods
	I0805 23:12:15.922669   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:15.922679   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:15.922684   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:15.928553   28839 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0805 23:12:15.933038   28839 system_pods.go:86] 17 kube-system pods found
	I0805 23:12:15.933072   28839 system_pods.go:89] "coredns-7db6d8ff4d-g9bml" [fd474413-e416-48db-a7bf-f3c40675819b] Running
	I0805 23:12:15.933081   28839 system_pods.go:89] "coredns-7db6d8ff4d-vzhst" [f9c09745-be29-4403-9e7d-f9e4eaae5cac] Running
	I0805 23:12:15.933089   28839 system_pods.go:89] "etcd-ha-044175" [f9008d52-5a0c-4a6b-9cdf-7df18dd78752] Running
	I0805 23:12:15.933096   28839 system_pods.go:89] "etcd-ha-044175-m02" [773f42be-f8b5-47f0-bcd0-36bd6ae24bab] Running
	I0805 23:12:15.933102   28839 system_pods.go:89] "kindnet-hqhgc" [de6b28dc-79ea-43af-868e-e32180dcd5f2] Running
	I0805 23:12:15.933109   28839 system_pods.go:89] "kindnet-xqx4z" [8455705e-b140-4f1e-abff-6a71bbb5415f] Running
	I0805 23:12:15.933116   28839 system_pods.go:89] "kube-apiserver-ha-044175" [4e39654d-531d-4cf4-b4a9-beeada8e8d05] Running
	I0805 23:12:15.933123   28839 system_pods.go:89] "kube-apiserver-ha-044175-m02" [06dfad00-f627-43cd-abea-c3a34d423964] Running
	I0805 23:12:15.933131   28839 system_pods.go:89] "kube-controller-manager-ha-044175" [d6f6d163-103f-4af4-976f-c255d1933bb2] Running
	I0805 23:12:15.933142   28839 system_pods.go:89] "kube-controller-manager-ha-044175-m02" [1bf050d3-1969-4ca1-89d3-f729989fd6b8] Running
	I0805 23:12:15.933153   28839 system_pods.go:89] "kube-proxy-jfs9q" [d8d0b4df-e1e1-4354-ba55-594dec7d1e89] Running
	I0805 23:12:15.933161   28839 system_pods.go:89] "kube-proxy-vj5sd" [d6c9cdcb-e1b7-44c8-a6e3-5e5aeb76ba03] Running
	I0805 23:12:15.933169   28839 system_pods.go:89] "kube-scheduler-ha-044175" [41c96a32-1b26-4e05-a21a-48c4fd913b9f] Running
	I0805 23:12:15.933177   28839 system_pods.go:89] "kube-scheduler-ha-044175-m02" [8e41f86c-0b86-40be-a524-fbae6283693d] Running
	I0805 23:12:15.933185   28839 system_pods.go:89] "kube-vip-ha-044175" [505ff885-b8a0-48bd-8d1e-81e4583b48af] Running
	I0805 23:12:15.933192   28839 system_pods.go:89] "kube-vip-ha-044175-m02" [ffbecaef-6482-4c4e-8268-4b66e4799be5] Running
	I0805 23:12:15.933201   28839 system_pods.go:89] "storage-provisioner" [d30d1a5b-cfbe-4de6-a964-75c32e5dbf62] Running
	I0805 23:12:15.933214   28839 system_pods.go:126] duration metric: took 206.344214ms to wait for k8s-apps to be running ...
	I0805 23:12:15.933225   28839 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 23:12:15.933286   28839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 23:12:15.951297   28839 system_svc.go:56] duration metric: took 18.065984ms WaitForService to wait for kubelet
	I0805 23:12:15.951329   28839 kubeadm.go:582] duration metric: took 23.151674816s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 23:12:15.951350   28839 node_conditions.go:102] verifying NodePressure condition ...
	I0805 23:12:16.122799   28839 request.go:629] Waited for 171.37013ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes
	I0805 23:12:16.122865   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes
	I0805 23:12:16.122880   28839 round_trippers.go:469] Request Headers:
	I0805 23:12:16.122891   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:12:16.122901   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:12:16.131431   28839 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0805 23:12:16.132476   28839 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 23:12:16.132503   28839 node_conditions.go:123] node cpu capacity is 2
	I0805 23:12:16.132523   28839 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 23:12:16.132527   28839 node_conditions.go:123] node cpu capacity is 2
	I0805 23:12:16.132531   28839 node_conditions.go:105] duration metric: took 181.176198ms to run NodePressure ...
	I0805 23:12:16.132544   28839 start.go:241] waiting for startup goroutines ...
	I0805 23:12:16.132575   28839 start.go:255] writing updated cluster config ...
	I0805 23:12:16.135079   28839 out.go:177] 
	I0805 23:12:16.136635   28839 config.go:182] Loaded profile config "ha-044175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:12:16.136721   28839 profile.go:143] Saving config to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/config.json ...
	I0805 23:12:16.138404   28839 out.go:177] * Starting "ha-044175-m03" control-plane node in "ha-044175" cluster
	I0805 23:12:16.139831   28839 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 23:12:16.139854   28839 cache.go:56] Caching tarball of preloaded images
	I0805 23:12:16.139981   28839 preload.go:172] Found /home/jenkins/minikube-integration/19373-9606/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0805 23:12:16.140001   28839 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0805 23:12:16.140108   28839 profile.go:143] Saving config to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/config.json ...
	I0805 23:12:16.140337   28839 start.go:360] acquireMachinesLock for ha-044175-m03: {Name:mkd2ba511c39504598222edbf83078b718329186 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 23:12:16.140400   28839 start.go:364] duration metric: took 35.222µs to acquireMachinesLock for "ha-044175-m03"
	I0805 23:12:16.140420   28839 start.go:93] Provisioning new machine with config: &{Name:ha-044175 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-044175 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.112 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
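The struct dump above is the cluster config that gets written to profiles/ha-044175/config.json. A minimal sketch of persisting such a config as JSON follows; the ClusterConfig type here is a deliberately tiny, hypothetical subset of the fields shown in the log, not minikube's real config type.

    // config_roundtrip.go: a minimal sketch of the "Saving config to
    // .../config.json" step, using a cut-down, hypothetical struct.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    type Node struct {
        Name         string
        IP           string
        Port         int
        ControlPlane bool
        Worker       bool
    }

    type ClusterConfig struct {
        Name              string
        Driver            string
        Memory            int
        CPUs              int
        KubernetesVersion string
        ContainerRuntime  string
        Nodes             []Node
    }

    func main() {
        cfg := ClusterConfig{
            Name: "ha-044175", Driver: "kvm2", Memory: 2200, CPUs: 2,
            KubernetesVersion: "v1.30.3", ContainerRuntime: "crio",
            Nodes: []Node{
                {Name: "", IP: "192.168.39.57", Port: 8443, ControlPlane: true, Worker: true},
                {Name: "m02", IP: "192.168.39.112", Port: 8443, ControlPlane: true, Worker: true},
                {Name: "m03", Port: 8443, ControlPlane: true, Worker: true}, // IP not known yet
            },
        }
        data, err := json.MarshalIndent(cfg, "", "  ")
        if err != nil {
            panic(err)
        }
        if err := os.WriteFile("config.json", data, 0o644); err != nil {
            panic(err)
        }
        fmt.Printf("wrote %d bytes of cluster config\n", len(data))
    }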
	I0805 23:12:16.140537   28839 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0805 23:12:16.142457   28839 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 23:12:16.142624   28839 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:12:16.142673   28839 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:12:16.158944   28839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40793
	I0805 23:12:16.159390   28839 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:12:16.159849   28839 main.go:141] libmachine: Using API Version  1
	I0805 23:12:16.159866   28839 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:12:16.160215   28839 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:12:16.160411   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetMachineName
	I0805 23:12:16.160572   28839 main.go:141] libmachine: (ha-044175-m03) Calling .DriverName
	I0805 23:12:16.160737   28839 start.go:159] libmachine.API.Create for "ha-044175" (driver="kvm2")
	I0805 23:12:16.160771   28839 client.go:168] LocalClient.Create starting
	I0805 23:12:16.160810   28839 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem
	I0805 23:12:16.160850   28839 main.go:141] libmachine: Decoding PEM data...
	I0805 23:12:16.160868   28839 main.go:141] libmachine: Parsing certificate...
	I0805 23:12:16.160921   28839 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem
	I0805 23:12:16.160944   28839 main.go:141] libmachine: Decoding PEM data...
	I0805 23:12:16.160959   28839 main.go:141] libmachine: Parsing certificate...
	I0805 23:12:16.160978   28839 main.go:141] libmachine: Running pre-create checks...
	I0805 23:12:16.160993   28839 main.go:141] libmachine: (ha-044175-m03) Calling .PreCreateCheck
	I0805 23:12:16.161166   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetConfigRaw
	I0805 23:12:16.161570   28839 main.go:141] libmachine: Creating machine...
	I0805 23:12:16.161583   28839 main.go:141] libmachine: (ha-044175-m03) Calling .Create
	I0805 23:12:16.161702   28839 main.go:141] libmachine: (ha-044175-m03) Creating KVM machine...
	I0805 23:12:16.163133   28839 main.go:141] libmachine: (ha-044175-m03) DBG | found existing default KVM network
	I0805 23:12:16.163285   28839 main.go:141] libmachine: (ha-044175-m03) DBG | found existing private KVM network mk-ha-044175
	I0805 23:12:16.163415   28839 main.go:141] libmachine: (ha-044175-m03) Setting up store path in /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m03 ...
	I0805 23:12:16.163439   28839 main.go:141] libmachine: (ha-044175-m03) Building disk image from file:///home/jenkins/minikube-integration/19373-9606/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0805 23:12:16.163516   28839 main.go:141] libmachine: (ha-044175-m03) DBG | I0805 23:12:16.163402   29643 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19373-9606/.minikube
	I0805 23:12:16.163576   28839 main.go:141] libmachine: (ha-044175-m03) Downloading /home/jenkins/minikube-integration/19373-9606/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19373-9606/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0805 23:12:16.391616   28839 main.go:141] libmachine: (ha-044175-m03) DBG | I0805 23:12:16.391460   29643 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m03/id_rsa...
	I0805 23:12:16.494948   28839 main.go:141] libmachine: (ha-044175-m03) DBG | I0805 23:12:16.494820   29643 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m03/ha-044175-m03.rawdisk...
	I0805 23:12:16.494984   28839 main.go:141] libmachine: (ha-044175-m03) DBG | Writing magic tar header
	I0805 23:12:16.494998   28839 main.go:141] libmachine: (ha-044175-m03) DBG | Writing SSH key tar header
	I0805 23:12:16.495009   28839 main.go:141] libmachine: (ha-044175-m03) DBG | I0805 23:12:16.494927   29643 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m03 ...
	I0805 23:12:16.495025   28839 main.go:141] libmachine: (ha-044175-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m03
	I0805 23:12:16.495073   28839 main.go:141] libmachine: (ha-044175-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19373-9606/.minikube/machines
	I0805 23:12:16.495090   28839 main.go:141] libmachine: (ha-044175-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19373-9606/.minikube
	I0805 23:12:16.495102   28839 main.go:141] libmachine: (ha-044175-m03) Setting executable bit set on /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m03 (perms=drwx------)
	I0805 23:12:16.495136   28839 main.go:141] libmachine: (ha-044175-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19373-9606
	I0805 23:12:16.495158   28839 main.go:141] libmachine: (ha-044175-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0805 23:12:16.495172   28839 main.go:141] libmachine: (ha-044175-m03) Setting executable bit set on /home/jenkins/minikube-integration/19373-9606/.minikube/machines (perms=drwxr-xr-x)
	I0805 23:12:16.495197   28839 main.go:141] libmachine: (ha-044175-m03) Setting executable bit set on /home/jenkins/minikube-integration/19373-9606/.minikube (perms=drwxr-xr-x)
	I0805 23:12:16.495212   28839 main.go:141] libmachine: (ha-044175-m03) Setting executable bit set on /home/jenkins/minikube-integration/19373-9606 (perms=drwxrwxr-x)
	I0805 23:12:16.495227   28839 main.go:141] libmachine: (ha-044175-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0805 23:12:16.495242   28839 main.go:141] libmachine: (ha-044175-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0805 23:12:16.495255   28839 main.go:141] libmachine: (ha-044175-m03) DBG | Checking permissions on dir: /home/jenkins
	I0805 23:12:16.495266   28839 main.go:141] libmachine: (ha-044175-m03) Creating domain...
	I0805 23:12:16.495279   28839 main.go:141] libmachine: (ha-044175-m03) DBG | Checking permissions on dir: /home
	I0805 23:12:16.495296   28839 main.go:141] libmachine: (ha-044175-m03) DBG | Skipping /home - not owner
	I0805 23:12:16.496167   28839 main.go:141] libmachine: (ha-044175-m03) define libvirt domain using xml: 
	I0805 23:12:16.496204   28839 main.go:141] libmachine: (ha-044175-m03) <domain type='kvm'>
	I0805 23:12:16.496219   28839 main.go:141] libmachine: (ha-044175-m03)   <name>ha-044175-m03</name>
	I0805 23:12:16.496232   28839 main.go:141] libmachine: (ha-044175-m03)   <memory unit='MiB'>2200</memory>
	I0805 23:12:16.496243   28839 main.go:141] libmachine: (ha-044175-m03)   <vcpu>2</vcpu>
	I0805 23:12:16.496253   28839 main.go:141] libmachine: (ha-044175-m03)   <features>
	I0805 23:12:16.496262   28839 main.go:141] libmachine: (ha-044175-m03)     <acpi/>
	I0805 23:12:16.496276   28839 main.go:141] libmachine: (ha-044175-m03)     <apic/>
	I0805 23:12:16.496288   28839 main.go:141] libmachine: (ha-044175-m03)     <pae/>
	I0805 23:12:16.496297   28839 main.go:141] libmachine: (ha-044175-m03)     
	I0805 23:12:16.496307   28839 main.go:141] libmachine: (ha-044175-m03)   </features>
	I0805 23:12:16.496316   28839 main.go:141] libmachine: (ha-044175-m03)   <cpu mode='host-passthrough'>
	I0805 23:12:16.496327   28839 main.go:141] libmachine: (ha-044175-m03)   
	I0805 23:12:16.496335   28839 main.go:141] libmachine: (ha-044175-m03)   </cpu>
	I0805 23:12:16.496366   28839 main.go:141] libmachine: (ha-044175-m03)   <os>
	I0805 23:12:16.496387   28839 main.go:141] libmachine: (ha-044175-m03)     <type>hvm</type>
	I0805 23:12:16.496411   28839 main.go:141] libmachine: (ha-044175-m03)     <boot dev='cdrom'/>
	I0805 23:12:16.496427   28839 main.go:141] libmachine: (ha-044175-m03)     <boot dev='hd'/>
	I0805 23:12:16.496443   28839 main.go:141] libmachine: (ha-044175-m03)     <bootmenu enable='no'/>
	I0805 23:12:16.496459   28839 main.go:141] libmachine: (ha-044175-m03)   </os>
	I0805 23:12:16.496471   28839 main.go:141] libmachine: (ha-044175-m03)   <devices>
	I0805 23:12:16.496482   28839 main.go:141] libmachine: (ha-044175-m03)     <disk type='file' device='cdrom'>
	I0805 23:12:16.496497   28839 main.go:141] libmachine: (ha-044175-m03)       <source file='/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m03/boot2docker.iso'/>
	I0805 23:12:16.496510   28839 main.go:141] libmachine: (ha-044175-m03)       <target dev='hdc' bus='scsi'/>
	I0805 23:12:16.496523   28839 main.go:141] libmachine: (ha-044175-m03)       <readonly/>
	I0805 23:12:16.496537   28839 main.go:141] libmachine: (ha-044175-m03)     </disk>
	I0805 23:12:16.496551   28839 main.go:141] libmachine: (ha-044175-m03)     <disk type='file' device='disk'>
	I0805 23:12:16.496566   28839 main.go:141] libmachine: (ha-044175-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0805 23:12:16.496582   28839 main.go:141] libmachine: (ha-044175-m03)       <source file='/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m03/ha-044175-m03.rawdisk'/>
	I0805 23:12:16.496593   28839 main.go:141] libmachine: (ha-044175-m03)       <target dev='hda' bus='virtio'/>
	I0805 23:12:16.496607   28839 main.go:141] libmachine: (ha-044175-m03)     </disk>
	I0805 23:12:16.496624   28839 main.go:141] libmachine: (ha-044175-m03)     <interface type='network'>
	I0805 23:12:16.496639   28839 main.go:141] libmachine: (ha-044175-m03)       <source network='mk-ha-044175'/>
	I0805 23:12:16.496649   28839 main.go:141] libmachine: (ha-044175-m03)       <model type='virtio'/>
	I0805 23:12:16.496659   28839 main.go:141] libmachine: (ha-044175-m03)     </interface>
	I0805 23:12:16.496667   28839 main.go:141] libmachine: (ha-044175-m03)     <interface type='network'>
	I0805 23:12:16.496673   28839 main.go:141] libmachine: (ha-044175-m03)       <source network='default'/>
	I0805 23:12:16.496682   28839 main.go:141] libmachine: (ha-044175-m03)       <model type='virtio'/>
	I0805 23:12:16.496694   28839 main.go:141] libmachine: (ha-044175-m03)     </interface>
	I0805 23:12:16.496706   28839 main.go:141] libmachine: (ha-044175-m03)     <serial type='pty'>
	I0805 23:12:16.496718   28839 main.go:141] libmachine: (ha-044175-m03)       <target port='0'/>
	I0805 23:12:16.496729   28839 main.go:141] libmachine: (ha-044175-m03)     </serial>
	I0805 23:12:16.496740   28839 main.go:141] libmachine: (ha-044175-m03)     <console type='pty'>
	I0805 23:12:16.496750   28839 main.go:141] libmachine: (ha-044175-m03)       <target type='serial' port='0'/>
	I0805 23:12:16.496760   28839 main.go:141] libmachine: (ha-044175-m03)     </console>
	I0805 23:12:16.496771   28839 main.go:141] libmachine: (ha-044175-m03)     <rng model='virtio'>
	I0805 23:12:16.496788   28839 main.go:141] libmachine: (ha-044175-m03)       <backend model='random'>/dev/random</backend>
	I0805 23:12:16.496805   28839 main.go:141] libmachine: (ha-044175-m03)     </rng>
	I0805 23:12:16.496817   28839 main.go:141] libmachine: (ha-044175-m03)     
	I0805 23:12:16.496822   28839 main.go:141] libmachine: (ha-044175-m03)     
	I0805 23:12:16.496833   28839 main.go:141] libmachine: (ha-044175-m03)   </devices>
	I0805 23:12:16.496842   28839 main.go:141] libmachine: (ha-044175-m03) </domain>
	I0805 23:12:16.496852   28839 main.go:141] libmachine: (ha-044175-m03) 
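The lines above are the libvirt domain XML the kvm2 driver defines for the new VM. The sketch below renders a heavily trimmed domain definition of the same shape with text/template; it is an illustration only, not the driver's actual template, and the output would still need `virsh define` or the libvirt API to create a domain.

    // domain_xml.go: a minimal sketch of rendering a reduced libvirt domain
    // definition like the one logged above. Paths and network name mirror the
    // log; the XML skeleton is a trimmed illustration.
    package main

    import (
        "os"
        "text/template"
    )

    const domainTmpl = `<domain type='kvm'>
      <name>{{.Name}}</name>
      <memory unit='MiB'>{{.MemoryMiB}}</memory>
      <vcpu>{{.CPUs}}</vcpu>
      <os>
        <type>hvm</type>
        <boot dev='cdrom'/>
        <boot dev='hd'/>
      </os>
      <devices>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw'/>
          <source file='{{.DiskPath}}'/>
          <target dev='hda' bus='virtio'/>
        </disk>
        <interface type='network'>
          <source network='{{.Network}}'/>
          <model type='virtio'/>
        </interface>
      </devices>
    </domain>
    `

    type domain struct {
        Name      string
        MemoryMiB int
        CPUs      int
        DiskPath  string
        Network   string
    }

    func main() {
        t := template.Must(template.New("domain").Parse(domainTmpl))
        d := domain{Name: "ha-044175-m03", MemoryMiB: 2200, CPUs: 2,
            DiskPath: "/path/to/ha-044175-m03.rawdisk", Network: "mk-ha-044175"}
        if err := t.Execute(os.Stdout, d); err != nil {
            panic(err)
        }
    }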
	I0805 23:12:16.503725   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:6b:ba:6d in network default
	I0805 23:12:16.504450   28839 main.go:141] libmachine: (ha-044175-m03) Ensuring networks are active...
	I0805 23:12:16.504492   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:16.505339   28839 main.go:141] libmachine: (ha-044175-m03) Ensuring network default is active
	I0805 23:12:16.505649   28839 main.go:141] libmachine: (ha-044175-m03) Ensuring network mk-ha-044175 is active
	I0805 23:12:16.506103   28839 main.go:141] libmachine: (ha-044175-m03) Getting domain xml...
	I0805 23:12:16.506891   28839 main.go:141] libmachine: (ha-044175-m03) Creating domain...
	I0805 23:12:17.726625   28839 main.go:141] libmachine: (ha-044175-m03) Waiting to get IP...
	I0805 23:12:17.727449   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:17.727898   28839 main.go:141] libmachine: (ha-044175-m03) DBG | unable to find current IP address of domain ha-044175-m03 in network mk-ha-044175
	I0805 23:12:17.727926   28839 main.go:141] libmachine: (ha-044175-m03) DBG | I0805 23:12:17.727866   29643 retry.go:31] will retry after 203.767559ms: waiting for machine to come up
	I0805 23:12:17.933384   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:17.933880   28839 main.go:141] libmachine: (ha-044175-m03) DBG | unable to find current IP address of domain ha-044175-m03 in network mk-ha-044175
	I0805 23:12:17.933902   28839 main.go:141] libmachine: (ha-044175-m03) DBG | I0805 23:12:17.933844   29643 retry.go:31] will retry after 239.798979ms: waiting for machine to come up
	I0805 23:12:18.175419   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:18.175845   28839 main.go:141] libmachine: (ha-044175-m03) DBG | unable to find current IP address of domain ha-044175-m03 in network mk-ha-044175
	I0805 23:12:18.175870   28839 main.go:141] libmachine: (ha-044175-m03) DBG | I0805 23:12:18.175792   29643 retry.go:31] will retry after 326.454439ms: waiting for machine to come up
	I0805 23:12:18.504326   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:18.504792   28839 main.go:141] libmachine: (ha-044175-m03) DBG | unable to find current IP address of domain ha-044175-m03 in network mk-ha-044175
	I0805 23:12:18.504831   28839 main.go:141] libmachine: (ha-044175-m03) DBG | I0805 23:12:18.504766   29643 retry.go:31] will retry after 426.319717ms: waiting for machine to come up
	I0805 23:12:18.932425   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:18.932894   28839 main.go:141] libmachine: (ha-044175-m03) DBG | unable to find current IP address of domain ha-044175-m03 in network mk-ha-044175
	I0805 23:12:18.932928   28839 main.go:141] libmachine: (ha-044175-m03) DBG | I0805 23:12:18.932825   29643 retry.go:31] will retry after 613.530654ms: waiting for machine to come up
	I0805 23:12:19.547501   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:19.547980   28839 main.go:141] libmachine: (ha-044175-m03) DBG | unable to find current IP address of domain ha-044175-m03 in network mk-ha-044175
	I0805 23:12:19.548048   28839 main.go:141] libmachine: (ha-044175-m03) DBG | I0805 23:12:19.547951   29643 retry.go:31] will retry after 668.13083ms: waiting for machine to come up
	I0805 23:12:20.217948   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:20.218511   28839 main.go:141] libmachine: (ha-044175-m03) DBG | unable to find current IP address of domain ha-044175-m03 in network mk-ha-044175
	I0805 23:12:20.218535   28839 main.go:141] libmachine: (ha-044175-m03) DBG | I0805 23:12:20.218463   29643 retry.go:31] will retry after 1.100630535s: waiting for machine to come up
	I0805 23:12:21.320924   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:21.321377   28839 main.go:141] libmachine: (ha-044175-m03) DBG | unable to find current IP address of domain ha-044175-m03 in network mk-ha-044175
	I0805 23:12:21.321401   28839 main.go:141] libmachine: (ha-044175-m03) DBG | I0805 23:12:21.321295   29643 retry.go:31] will retry after 1.235967589s: waiting for machine to come up
	I0805 23:12:22.558632   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:22.559094   28839 main.go:141] libmachine: (ha-044175-m03) DBG | unable to find current IP address of domain ha-044175-m03 in network mk-ha-044175
	I0805 23:12:22.559115   28839 main.go:141] libmachine: (ha-044175-m03) DBG | I0805 23:12:22.559042   29643 retry.go:31] will retry after 1.216988644s: waiting for machine to come up
	I0805 23:12:23.777210   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:23.777638   28839 main.go:141] libmachine: (ha-044175-m03) DBG | unable to find current IP address of domain ha-044175-m03 in network mk-ha-044175
	I0805 23:12:23.777663   28839 main.go:141] libmachine: (ha-044175-m03) DBG | I0805 23:12:23.777586   29643 retry.go:31] will retry after 2.095063584s: waiting for machine to come up
	I0805 23:12:25.875961   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:25.876431   28839 main.go:141] libmachine: (ha-044175-m03) DBG | unable to find current IP address of domain ha-044175-m03 in network mk-ha-044175
	I0805 23:12:25.876456   28839 main.go:141] libmachine: (ha-044175-m03) DBG | I0805 23:12:25.876393   29643 retry.go:31] will retry after 1.975393786s: waiting for machine to come up
	I0805 23:12:27.853735   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:27.854234   28839 main.go:141] libmachine: (ha-044175-m03) DBG | unable to find current IP address of domain ha-044175-m03 in network mk-ha-044175
	I0805 23:12:27.854259   28839 main.go:141] libmachine: (ha-044175-m03) DBG | I0805 23:12:27.854195   29643 retry.go:31] will retry after 2.248104101s: waiting for machine to come up
	I0805 23:12:30.103437   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:30.103846   28839 main.go:141] libmachine: (ha-044175-m03) DBG | unable to find current IP address of domain ha-044175-m03 in network mk-ha-044175
	I0805 23:12:30.103861   28839 main.go:141] libmachine: (ha-044175-m03) DBG | I0805 23:12:30.103817   29643 retry.go:31] will retry after 2.931156145s: waiting for machine to come up
	I0805 23:12:33.036613   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:33.037025   28839 main.go:141] libmachine: (ha-044175-m03) DBG | unable to find current IP address of domain ha-044175-m03 in network mk-ha-044175
	I0805 23:12:33.037049   28839 main.go:141] libmachine: (ha-044175-m03) DBG | I0805 23:12:33.036982   29643 retry.go:31] will retry after 4.276164676s: waiting for machine to come up
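The repeated "will retry after ..." lines above show a wait loop with growing, jittered delays while the new VM acquires a DHCP lease. A generic Go sketch of that pattern follows; the base delay, growth factor, and jitter are assumptions for illustration, not minikube's exact retry schedule.

    // retry_backoff.go: a minimal sketch of polling a condition with a growing,
    // jittered delay until it succeeds or a deadline passes.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func retryUntil(timeout time.Duration, check func() (bool, error)) error {
        delay := 200 * time.Millisecond
        deadline := time.Now().Add(timeout)
        for attempt := 1; time.Now().Before(deadline); attempt++ {
            ok, err := check()
            if err != nil {
                return err
            }
            if ok {
                return nil
            }
            // Grow the delay and add up to 50% jitter, as the logged intervals suggest.
            sleep := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
            fmt.Printf("attempt %d: will retry after %s\n", attempt, sleep)
            time.Sleep(sleep)
            delay = delay * 3 / 2
        }
        return errors.New("condition not met before deadline")
    }

    func main() {
        start := time.Now()
        err := retryUntil(30*time.Second, func() (bool, error) {
            // Stand-in for "does the domain have an IP yet?"; succeeds after ~5s here.
            return time.Since(start) > 5*time.Second, nil
        })
        fmt.Println("result:", err)
    }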
	I0805 23:12:37.314725   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:37.315250   28839 main.go:141] libmachine: (ha-044175-m03) Found IP for machine: 192.168.39.201
	I0805 23:12:37.315282   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has current primary IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:37.315293   28839 main.go:141] libmachine: (ha-044175-m03) Reserving static IP address...
	I0805 23:12:37.315644   28839 main.go:141] libmachine: (ha-044175-m03) DBG | unable to find host DHCP lease matching {name: "ha-044175-m03", mac: "52:54:00:f4:37:04", ip: "192.168.39.201"} in network mk-ha-044175
	I0805 23:12:37.392179   28839 main.go:141] libmachine: (ha-044175-m03) DBG | Getting to WaitForSSH function...
	I0805 23:12:37.392213   28839 main.go:141] libmachine: (ha-044175-m03) Reserved static IP address: 192.168.39.201
	I0805 23:12:37.392225   28839 main.go:141] libmachine: (ha-044175-m03) Waiting for SSH to be available...
	I0805 23:12:37.395001   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:37.395500   28839 main.go:141] libmachine: (ha-044175-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175
	I0805 23:12:37.395530   28839 main.go:141] libmachine: (ha-044175-m03) DBG | unable to find defined IP address of network mk-ha-044175 interface with MAC address 52:54:00:f4:37:04
	I0805 23:12:37.395654   28839 main.go:141] libmachine: (ha-044175-m03) DBG | Using SSH client type: external
	I0805 23:12:37.395676   28839 main.go:141] libmachine: (ha-044175-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m03/id_rsa (-rw-------)
	I0805 23:12:37.395706   28839 main.go:141] libmachine: (ha-044175-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 23:12:37.395720   28839 main.go:141] libmachine: (ha-044175-m03) DBG | About to run SSH command:
	I0805 23:12:37.395738   28839 main.go:141] libmachine: (ha-044175-m03) DBG | exit 0
	I0805 23:12:37.399962   28839 main.go:141] libmachine: (ha-044175-m03) DBG | SSH cmd err, output: exit status 255: 
	I0805 23:12:37.399985   28839 main.go:141] libmachine: (ha-044175-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0805 23:12:37.399996   28839 main.go:141] libmachine: (ha-044175-m03) DBG | command : exit 0
	I0805 23:12:37.400003   28839 main.go:141] libmachine: (ha-044175-m03) DBG | err     : exit status 255
	I0805 23:12:37.400016   28839 main.go:141] libmachine: (ha-044175-m03) DBG | output  : 
	I0805 23:12:40.400584   28839 main.go:141] libmachine: (ha-044175-m03) DBG | Getting to WaitForSSH function...
	I0805 23:12:40.403127   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:40.403457   28839 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:12:40.403486   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:40.403644   28839 main.go:141] libmachine: (ha-044175-m03) DBG | Using SSH client type: external
	I0805 23:12:40.403670   28839 main.go:141] libmachine: (ha-044175-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m03/id_rsa (-rw-------)
	I0805 23:12:40.403700   28839 main.go:141] libmachine: (ha-044175-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.201 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0805 23:12:40.403711   28839 main.go:141] libmachine: (ha-044175-m03) DBG | About to run SSH command:
	I0805 23:12:40.403720   28839 main.go:141] libmachine: (ha-044175-m03) DBG | exit 0
	I0805 23:12:40.531190   28839 main.go:141] libmachine: (ha-044175-m03) DBG | SSH cmd err, output: <nil>: 
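The WaitForSSH probe above runs `exit 0` over the external SSH client until the command succeeds (the first attempt fails with status 255 before the guest is ready). A minimal Go sketch of the same idea follows, assuming OpenSSH on PATH; the option set is reduced for illustration and this is not libmachine's implementation.

    // ssh_probe.go: a minimal sketch of probing SSH availability by running
    // `exit 0` with the system OpenSSH client until it succeeds. Address and
    // key path are taken from the log above.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func sshReady(addr, keyPath string) bool {
        cmd := exec.Command("ssh",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "-i", keyPath,
            "docker@"+addr, "exit 0")
        return cmd.Run() == nil // non-zero exit (e.g. status 255) means not ready yet
    }

    func main() {
        addr := "192.168.39.201"
        key := "/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m03/id_rsa"
        for i := 0; i < 20; i++ {
            if sshReady(addr, key) {
                fmt.Println("SSH is available")
                return
            }
            time.Sleep(3 * time.Second) // the log shows a ~3s pause between probes
        }
        fmt.Println("gave up waiting for SSH")
    }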
	I0805 23:12:40.531412   28839 main.go:141] libmachine: (ha-044175-m03) KVM machine creation complete!
	I0805 23:12:40.531711   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetConfigRaw
	I0805 23:12:40.532231   28839 main.go:141] libmachine: (ha-044175-m03) Calling .DriverName
	I0805 23:12:40.532423   28839 main.go:141] libmachine: (ha-044175-m03) Calling .DriverName
	I0805 23:12:40.532552   28839 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0805 23:12:40.532567   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetState
	I0805 23:12:40.533849   28839 main.go:141] libmachine: Detecting operating system of created instance...
	I0805 23:12:40.533868   28839 main.go:141] libmachine: Waiting for SSH to be available...
	I0805 23:12:40.533882   28839 main.go:141] libmachine: Getting to WaitForSSH function...
	I0805 23:12:40.533890   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHHostname
	I0805 23:12:40.536165   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:40.536495   28839 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:12:40.536523   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:40.536614   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHPort
	I0805 23:12:40.536821   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHKeyPath
	I0805 23:12:40.536963   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHKeyPath
	I0805 23:12:40.537111   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHUsername
	I0805 23:12:40.537274   28839 main.go:141] libmachine: Using SSH client type: native
	I0805 23:12:40.537507   28839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I0805 23:12:40.537518   28839 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0805 23:12:40.650728   28839 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 23:12:40.650757   28839 main.go:141] libmachine: Detecting the provisioner...
	I0805 23:12:40.650767   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHHostname
	I0805 23:12:40.653865   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:40.654319   28839 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:12:40.654350   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:40.654464   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHPort
	I0805 23:12:40.654709   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHKeyPath
	I0805 23:12:40.654910   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHKeyPath
	I0805 23:12:40.655085   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHUsername
	I0805 23:12:40.655267   28839 main.go:141] libmachine: Using SSH client type: native
	I0805 23:12:40.655468   28839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I0805 23:12:40.655484   28839 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0805 23:12:40.772114   28839 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0805 23:12:40.772237   28839 main.go:141] libmachine: found compatible host: buildroot
	I0805 23:12:40.772248   28839 main.go:141] libmachine: Provisioning with buildroot...
	I0805 23:12:40.772255   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetMachineName
	I0805 23:12:40.772507   28839 buildroot.go:166] provisioning hostname "ha-044175-m03"
	I0805 23:12:40.772535   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetMachineName
	I0805 23:12:40.772738   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHHostname
	I0805 23:12:40.775382   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:40.775748   28839 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:12:40.775776   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:40.776000   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHPort
	I0805 23:12:40.776189   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHKeyPath
	I0805 23:12:40.776350   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHKeyPath
	I0805 23:12:40.776492   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHUsername
	I0805 23:12:40.776662   28839 main.go:141] libmachine: Using SSH client type: native
	I0805 23:12:40.776820   28839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I0805 23:12:40.776832   28839 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-044175-m03 && echo "ha-044175-m03" | sudo tee /etc/hostname
	I0805 23:12:40.911933   28839 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-044175-m03
	
	I0805 23:12:40.911968   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHHostname
	I0805 23:12:40.914562   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:40.914922   28839 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:12:40.914947   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:40.915149   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHPort
	I0805 23:12:40.915309   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHKeyPath
	I0805 23:12:40.915474   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHKeyPath
	I0805 23:12:40.915606   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHUsername
	I0805 23:12:40.915749   28839 main.go:141] libmachine: Using SSH client type: native
	I0805 23:12:40.915943   28839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I0805 23:12:40.915961   28839 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-044175-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-044175-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-044175-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 23:12:41.040816   28839 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 23:12:41.040846   28839 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19373-9606/.minikube CaCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19373-9606/.minikube}
	I0805 23:12:41.040866   28839 buildroot.go:174] setting up certificates
	I0805 23:12:41.040880   28839 provision.go:84] configureAuth start
	I0805 23:12:41.040894   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetMachineName
	I0805 23:12:41.041154   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetIP
	I0805 23:12:41.043913   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:41.044351   28839 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:12:41.044378   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:41.044514   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHHostname
	I0805 23:12:41.046897   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:41.047336   28839 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:12:41.047357   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:41.047458   28839 provision.go:143] copyHostCerts
	I0805 23:12:41.047498   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem
	I0805 23:12:41.047539   28839 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem, removing ...
	I0805 23:12:41.047549   28839 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem
	I0805 23:12:41.047612   28839 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem (1679 bytes)
	I0805 23:12:41.047691   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem
	I0805 23:12:41.047709   28839 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem, removing ...
	I0805 23:12:41.047716   28839 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem
	I0805 23:12:41.047741   28839 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem (1082 bytes)
	I0805 23:12:41.047790   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem
	I0805 23:12:41.047812   28839 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem, removing ...
	I0805 23:12:41.047818   28839 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem
	I0805 23:12:41.047842   28839 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem (1123 bytes)
	I0805 23:12:41.047913   28839 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem org=jenkins.ha-044175-m03 san=[127.0.0.1 192.168.39.201 ha-044175-m03 localhost minikube]
	I0805 23:12:41.135263   28839 provision.go:177] copyRemoteCerts
	I0805 23:12:41.135319   28839 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 23:12:41.135343   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHHostname
	I0805 23:12:41.138088   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:41.138415   28839 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:12:41.138443   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:41.138639   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHPort
	I0805 23:12:41.138865   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHKeyPath
	I0805 23:12:41.139033   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHUsername
	I0805 23:12:41.139251   28839 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m03/id_rsa Username:docker}
	I0805 23:12:41.229814   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 23:12:41.229892   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0805 23:12:41.254889   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 23:12:41.254966   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 23:12:41.280662   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 23:12:41.280736   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 23:12:41.305856   28839 provision.go:87] duration metric: took 264.960326ms to configureAuth
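copyRemoteCerts above pushes server.pem, server-key.pem, and ca.pem into /etc/docker on the new node over SSH. As a rough illustration only, the sketch below stages the same files with the stock scp client; minikube's ssh_runner streams the files over an SSH session with sudo rather than calling scp, and staging into /tmp first is an assumption made here to avoid root-owned paths.

    // copy_certs.go: a minimal sketch of copying the generated certs to the
    // node with scp. Not minikube's ssh_runner; files land in /tmp and would
    // still need a sudo move into /etc/docker.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        node := "192.168.39.201"
        key := "/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m03/id_rsa"
        files := []string{"server.pem", "server-key.pem", "ca.pem"}
        for _, f := range files {
            cmd := exec.Command("scp",
                "-o", "StrictHostKeyChecking=no",
                "-o", "UserKnownHostsFile=/dev/null",
                "-i", key,
                f, fmt.Sprintf("docker@%s:/tmp/%s", node, f))
            if out, err := cmd.CombinedOutput(); err != nil {
                fmt.Printf("copying %s failed: %v\n%s", f, err, out)
                return
            }
        }
        fmt.Println("certs staged in /tmp on the node (move to /etc/docker with sudo)")
    }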
	I0805 23:12:41.305887   28839 buildroot.go:189] setting minikube options for container-runtime
	I0805 23:12:41.306177   28839 config.go:182] Loaded profile config "ha-044175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:12:41.306280   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHHostname
	I0805 23:12:41.308968   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:41.309366   28839 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:12:41.309395   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:41.309569   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHPort
	I0805 23:12:41.309760   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHKeyPath
	I0805 23:12:41.309961   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHKeyPath
	I0805 23:12:41.310100   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHUsername
	I0805 23:12:41.310242   28839 main.go:141] libmachine: Using SSH client type: native
	I0805 23:12:41.310391   28839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I0805 23:12:41.310405   28839 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 23:12:41.592819   28839 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 23:12:41.592845   28839 main.go:141] libmachine: Checking connection to Docker...
	I0805 23:12:41.592856   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetURL
	I0805 23:12:41.594183   28839 main.go:141] libmachine: (ha-044175-m03) DBG | Using libvirt version 6000000
	I0805 23:12:41.596828   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:41.597298   28839 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:12:41.597325   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:41.597478   28839 main.go:141] libmachine: Docker is up and running!
	I0805 23:12:41.597490   28839 main.go:141] libmachine: Reticulating splines...
	I0805 23:12:41.597497   28839 client.go:171] duration metric: took 25.436714553s to LocalClient.Create
	I0805 23:12:41.597524   28839 start.go:167] duration metric: took 25.436787614s to libmachine.API.Create "ha-044175"
	I0805 23:12:41.597536   28839 start.go:293] postStartSetup for "ha-044175-m03" (driver="kvm2")
	I0805 23:12:41.597556   28839 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 23:12:41.597571   28839 main.go:141] libmachine: (ha-044175-m03) Calling .DriverName
	I0805 23:12:41.597828   28839 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 23:12:41.597853   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHHostname
	I0805 23:12:41.600379   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:41.600765   28839 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:12:41.600788   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:41.600950   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHPort
	I0805 23:12:41.601183   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHKeyPath
	I0805 23:12:41.601343   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHUsername
	I0805 23:12:41.601470   28839 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m03/id_rsa Username:docker}
	I0805 23:12:41.690542   28839 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 23:12:41.694912   28839 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 23:12:41.694939   28839 filesync.go:126] Scanning /home/jenkins/minikube-integration/19373-9606/.minikube/addons for local assets ...
	I0805 23:12:41.695008   28839 filesync.go:126] Scanning /home/jenkins/minikube-integration/19373-9606/.minikube/files for local assets ...
	I0805 23:12:41.695114   28839 filesync.go:149] local asset: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem -> 167922.pem in /etc/ssl/certs
	I0805 23:12:41.695129   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem -> /etc/ssl/certs/167922.pem
	I0805 23:12:41.695242   28839 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 23:12:41.705540   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem --> /etc/ssl/certs/167922.pem (1708 bytes)
	I0805 23:12:41.733699   28839 start.go:296] duration metric: took 136.142198ms for postStartSetup
	I0805 23:12:41.733756   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetConfigRaw
	I0805 23:12:41.734474   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetIP
	I0805 23:12:41.737105   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:41.737508   28839 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:12:41.737530   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:41.737826   28839 profile.go:143] Saving config to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/config.json ...
	I0805 23:12:41.738043   28839 start.go:128] duration metric: took 25.597496393s to createHost
	I0805 23:12:41.738069   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHHostname
	I0805 23:12:41.740252   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:41.740581   28839 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:12:41.740606   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:41.740704   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHPort
	I0805 23:12:41.740906   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHKeyPath
	I0805 23:12:41.741078   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHKeyPath
	I0805 23:12:41.741217   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHUsername
	I0805 23:12:41.741374   28839 main.go:141] libmachine: Using SSH client type: native
	I0805 23:12:41.741544   28839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I0805 23:12:41.741557   28839 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 23:12:41.855935   28839 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722899561.810527968
	
	I0805 23:12:41.855959   28839 fix.go:216] guest clock: 1722899561.810527968
	I0805 23:12:41.855970   28839 fix.go:229] Guest: 2024-08-05 23:12:41.810527968 +0000 UTC Remote: 2024-08-05 23:12:41.73805629 +0000 UTC m=+161.054044407 (delta=72.471678ms)
	I0805 23:12:41.855989   28839 fix.go:200] guest clock delta is within tolerance: 72.471678ms
	I0805 23:12:41.855996   28839 start.go:83] releasing machines lock for "ha-044175-m03", held for 25.715587212s
	I0805 23:12:41.856020   28839 main.go:141] libmachine: (ha-044175-m03) Calling .DriverName
	I0805 23:12:41.856341   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetIP
	I0805 23:12:41.859354   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:41.859743   28839 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:12:41.859771   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:41.861941   28839 out.go:177] * Found network options:
	I0805 23:12:41.863319   28839 out.go:177]   - NO_PROXY=192.168.39.57,192.168.39.112
	W0805 23:12:41.864893   28839 proxy.go:119] fail to check proxy env: Error ip not in block
	W0805 23:12:41.864921   28839 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 23:12:41.864938   28839 main.go:141] libmachine: (ha-044175-m03) Calling .DriverName
	I0805 23:12:41.865418   28839 main.go:141] libmachine: (ha-044175-m03) Calling .DriverName
	I0805 23:12:41.865628   28839 main.go:141] libmachine: (ha-044175-m03) Calling .DriverName
	I0805 23:12:41.865738   28839 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 23:12:41.865802   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHHostname
	W0805 23:12:41.865800   28839 proxy.go:119] fail to check proxy env: Error ip not in block
	W0805 23:12:41.865846   28839 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 23:12:41.865945   28839 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 23:12:41.865967   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHHostname
	I0805 23:12:41.868825   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:41.868845   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:41.869287   28839 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:12:41.869313   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:41.869337   28839 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:12:41.869355   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:41.869435   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHPort
	I0805 23:12:41.869563   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHPort
	I0805 23:12:41.869643   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHKeyPath
	I0805 23:12:41.869711   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHKeyPath
	I0805 23:12:41.869772   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHUsername
	I0805 23:12:41.869836   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHUsername
	I0805 23:12:41.869893   28839 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m03/id_rsa Username:docker}
	I0805 23:12:41.869990   28839 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m03/id_rsa Username:docker}
	I0805 23:12:42.115392   28839 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 23:12:42.121242   28839 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 23:12:42.121300   28839 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 23:12:42.138419   28839 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 23:12:42.138445   28839 start.go:495] detecting cgroup driver to use...
	I0805 23:12:42.138512   28839 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 23:12:42.154940   28839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 23:12:42.171891   28839 docker.go:217] disabling cri-docker service (if available) ...
	I0805 23:12:42.171955   28839 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 23:12:42.187452   28839 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 23:12:42.203635   28839 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 23:12:42.331363   28839 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 23:12:42.486729   28839 docker.go:233] disabling docker service ...
	I0805 23:12:42.486813   28839 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 23:12:42.502563   28839 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 23:12:42.516833   28839 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 23:12:42.653003   28839 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 23:12:42.782159   28839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 23:12:42.797842   28839 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 23:12:42.816825   28839 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0805 23:12:42.816891   28839 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:12:42.827670   28839 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 23:12:42.827745   28839 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:12:42.838303   28839 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:12:42.849311   28839 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:12:42.860901   28839 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 23:12:42.871683   28839 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:12:42.883404   28839 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:12:42.903914   28839 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:12:42.914926   28839 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 23:12:42.924481   28839 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0805 23:12:42.924551   28839 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0805 23:12:42.937387   28839 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 23:12:42.947466   28839 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 23:12:43.065481   28839 ssh_runner.go:195] Run: sudo systemctl restart crio
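
The sed invocations above pin the pause image and switch CRI-O to the cgroupfs cgroup manager before restarting the service. The Go sketch below reproduces only the first two substitutions against the same drop-in file; it is an illustration of the edit, not minikube's implementation, and it still needs root plus a crio restart to take effect.

package main

import (
	"log"
	"os"
	"regexp"
)

// rewriteCrioConf mirrors the first two sed edits above: it pins the pause
// image and switches the cgroup manager in a CRI-O drop-in config file.
func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "`+pauseImage+`"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "`+cgroupManager+`"`))
	return os.WriteFile(path, data, 0o644)
}

func main() {
	// Path and values are the ones visible in the log above.
	if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.9", "cgroupfs"); err != nil {
		log.Fatal(err)
	}
}
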
	I0805 23:12:43.220596   28839 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 23:12:43.220678   28839 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 23:12:43.225780   28839 start.go:563] Will wait 60s for crictl version
	I0805 23:12:43.225839   28839 ssh_runner.go:195] Run: which crictl
	I0805 23:12:43.229784   28839 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 23:12:43.273939   28839 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 23:12:43.274031   28839 ssh_runner.go:195] Run: crio --version
	I0805 23:12:43.306047   28839 ssh_runner.go:195] Run: crio --version
	I0805 23:12:43.338481   28839 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0805 23:12:43.340246   28839 out.go:177]   - env NO_PROXY=192.168.39.57
	I0805 23:12:43.341615   28839 out.go:177]   - env NO_PROXY=192.168.39.57,192.168.39.112
	I0805 23:12:43.343026   28839 main.go:141] libmachine: (ha-044175-m03) Calling .GetIP
	I0805 23:12:43.346432   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:43.346881   28839 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:12:43.346908   28839 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:12:43.347212   28839 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0805 23:12:43.351889   28839 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
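
The grep/echo pipeline above rewrites /etc/hosts so that host.minikube.internal resolves to the gateway address. A small Go sketch of the same idempotent update, assuming direct write access to the file:

package main

import (
	"log"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line for the host name and appends a
// fresh "IP<TAB>name" mapping, mirroring the grep/echo pipeline above.
// Blank lines are dropped for brevity.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // replace the stale mapping
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// IP and host name are the ones from the log; writing /etc/hosts needs root.
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}
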
	I0805 23:12:43.365732   28839 mustload.go:65] Loading cluster: ha-044175
	I0805 23:12:43.365972   28839 config.go:182] Loaded profile config "ha-044175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:12:43.366273   28839 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:12:43.366316   28839 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:12:43.380599   28839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35391
	I0805 23:12:43.381039   28839 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:12:43.381493   28839 main.go:141] libmachine: Using API Version  1
	I0805 23:12:43.381519   28839 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:12:43.381916   28839 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:12:43.382176   28839 main.go:141] libmachine: (ha-044175) Calling .GetState
	I0805 23:12:43.383977   28839 host.go:66] Checking if "ha-044175" exists ...
	I0805 23:12:43.384352   28839 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:12:43.384402   28839 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:12:43.399465   28839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32885
	I0805 23:12:43.399903   28839 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:12:43.400360   28839 main.go:141] libmachine: Using API Version  1
	I0805 23:12:43.400384   28839 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:12:43.400658   28839 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:12:43.400818   28839 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:12:43.400963   28839 certs.go:68] Setting up /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175 for IP: 192.168.39.201
	I0805 23:12:43.400972   28839 certs.go:194] generating shared ca certs ...
	I0805 23:12:43.400984   28839 certs.go:226] acquiring lock for ca certs: {Name:mkf35a042c1656d191f542eee7fa087aad4d29d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 23:12:43.401114   28839 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key
	I0805 23:12:43.401169   28839 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key
	I0805 23:12:43.401182   28839 certs.go:256] generating profile certs ...
	I0805 23:12:43.401266   28839 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/client.key
	I0805 23:12:43.401298   28839 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key.ce298ff1
	I0805 23:12:43.401313   28839 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt.ce298ff1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.57 192.168.39.112 192.168.39.201 192.168.39.254]
	I0805 23:12:43.614914   28839 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt.ce298ff1 ...
	I0805 23:12:43.614942   28839 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt.ce298ff1: {Name:mkb3dfb2f5fd0b26a6a36cb6f006f2202db1b3f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 23:12:43.615116   28839 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key.ce298ff1 ...
	I0805 23:12:43.615132   28839 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key.ce298ff1: {Name:mkc9fb59d0e5374772bfc7d4f2f4f67d3ffc06b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 23:12:43.615210   28839 certs.go:381] copying /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt.ce298ff1 -> /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt
	I0805 23:12:43.615329   28839 certs.go:385] copying /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key.ce298ff1 -> /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key
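
The apiserver certificate above is generated with the listed service, loopback, node and VIP addresses as IP SANs. The sketch below shows how such a certificate could be produced with crypto/x509; it self-signs for brevity, whereas the real certificate is signed by minikubeCA, and the common name and validity period are assumptions.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs copied from the log; the real cert is CA-signed, this one is not.
	sans := []string{"10.96.0.1", "127.0.0.1", "10.0.0.1",
		"192.168.39.57", "192.168.39.112", "192.168.39.201", "192.168.39.254"}
	var ips []net.IP
	for _, s := range sans {
		ips = append(ips, net.ParseIP(s))
	}

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"}, // assumed CN
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // assumed validity
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pemBytes := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	if err := os.WriteFile("apiserver.crt", pemBytes, 0o644); err != nil {
		log.Fatal(err)
	}
}
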
	I0805 23:12:43.615451   28839 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.key
	I0805 23:12:43.615465   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0805 23:12:43.615478   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0805 23:12:43.615491   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0805 23:12:43.615504   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0805 23:12:43.615516   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0805 23:12:43.615529   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0805 23:12:43.615541   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0805 23:12:43.615553   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0805 23:12:43.615605   28839 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792.pem (1338 bytes)
	W0805 23:12:43.615631   28839 certs.go:480] ignoring /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792_empty.pem, impossibly tiny 0 bytes
	I0805 23:12:43.615640   28839 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 23:12:43.615662   28839 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem (1082 bytes)
	I0805 23:12:43.615682   28839 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem (1123 bytes)
	I0805 23:12:43.615702   28839 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem (1679 bytes)
	I0805 23:12:43.615737   28839 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem (1708 bytes)
	I0805 23:12:43.615761   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792.pem -> /usr/share/ca-certificates/16792.pem
	I0805 23:12:43.615774   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem -> /usr/share/ca-certificates/167922.pem
	I0805 23:12:43.615788   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0805 23:12:43.615821   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:12:43.618655   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:12:43.619118   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:12:43.619144   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:12:43.619328   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:12:43.619495   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:12:43.619657   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:12:43.619755   28839 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/id_rsa Username:docker}
	I0805 23:12:43.691394   28839 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0805 23:12:43.697185   28839 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0805 23:12:43.709581   28839 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0805 23:12:43.714326   28839 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0805 23:12:43.726125   28839 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0805 23:12:43.730719   28839 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0805 23:12:43.749305   28839 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0805 23:12:43.753726   28839 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0805 23:12:43.764044   28839 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0805 23:12:43.768596   28839 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0805 23:12:43.781828   28839 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0805 23:12:43.787289   28839 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0805 23:12:43.803290   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 23:12:43.831625   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0805 23:12:43.857350   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 23:12:43.881128   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 23:12:43.905585   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0805 23:12:43.929656   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0805 23:12:43.955804   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 23:12:43.979926   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0805 23:12:44.004002   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792.pem --> /usr/share/ca-certificates/16792.pem (1338 bytes)
	I0805 23:12:44.029378   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem --> /usr/share/ca-certificates/167922.pem (1708 bytes)
	I0805 23:12:44.058121   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 23:12:44.082413   28839 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0805 23:12:44.100775   28839 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0805 23:12:44.119482   28839 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0805 23:12:44.136137   28839 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0805 23:12:44.153074   28839 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0805 23:12:44.170347   28839 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0805 23:12:44.187745   28839 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0805 23:12:44.205325   28839 ssh_runner.go:195] Run: openssl version
	I0805 23:12:44.211566   28839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 23:12:44.224098   28839 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 23:12:44.228763   28839 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0805 23:12:44.228825   28839 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 23:12:44.234887   28839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 23:12:44.246481   28839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16792.pem && ln -fs /usr/share/ca-certificates/16792.pem /etc/ssl/certs/16792.pem"
	I0805 23:12:44.257667   28839 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16792.pem
	I0805 23:12:44.262354   28839 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 23:03 /usr/share/ca-certificates/16792.pem
	I0805 23:12:44.262415   28839 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16792.pem
	I0805 23:12:44.268104   28839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16792.pem /etc/ssl/certs/51391683.0"
	I0805 23:12:44.279023   28839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167922.pem && ln -fs /usr/share/ca-certificates/167922.pem /etc/ssl/certs/167922.pem"
	I0805 23:12:44.290198   28839 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167922.pem
	I0805 23:12:44.294735   28839 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 23:03 /usr/share/ca-certificates/167922.pem
	I0805 23:12:44.294797   28839 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167922.pem
	I0805 23:12:44.300670   28839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167922.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 23:12:44.311822   28839 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 23:12:44.316292   28839 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0805 23:12:44.316337   28839 kubeadm.go:934] updating node {m03 192.168.39.201 8443 v1.30.3 crio true true} ...
	I0805 23:12:44.316414   28839 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-044175-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.201
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-044175 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 23:12:44.316438   28839 kube-vip.go:115] generating kube-vip config ...
	I0805 23:12:44.316471   28839 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0805 23:12:44.334138   28839 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0805 23:12:44.334214   28839 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
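
kube-vip.go renders the manifest above from a template with the cluster's VIP and API server port filled in. A trimmed text/template sketch of that idea follows; the template body is shortened for illustration and is not minikube's actual template.

package main

import (
	"log"
	"os"
	"text/template"
)

// A trimmed stand-in for the kube-vip static-pod template; only the fields
// that vary per cluster (VIP address and API server port) are parameterized.
const kubeVipTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v0.8.0
    args: ["manager"]
    env:
    - name: port
      value: "{{ .Port }}"
    - name: address
      value: "{{ .VIP }}"
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
	// Values taken from the rendered manifest above.
	if err := t.Execute(os.Stdout, struct {
		VIP  string
		Port int
	}{VIP: "192.168.39.254", Port: 8443}); err != nil {
		log.Fatal(err)
	}
}
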
	I0805 23:12:44.334273   28839 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 23:12:44.344314   28839 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0805 23:12:44.344379   28839 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0805 23:12:44.354301   28839 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0805 23:12:44.354334   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0805 23:12:44.354334   28839 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0805 23:12:44.354351   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0805 23:12:44.354421   28839 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0805 23:12:44.354426   28839 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0805 23:12:44.354306   28839 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0805 23:12:44.354475   28839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 23:12:44.370943   28839 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0805 23:12:44.370977   28839 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0805 23:12:44.370990   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0805 23:12:44.371016   28839 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0805 23:12:44.371059   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0805 23:12:44.371086   28839 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0805 23:12:44.407258   28839 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0805 23:12:44.407302   28839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0805 23:12:45.341946   28839 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0805 23:12:45.351916   28839 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0805 23:12:45.369055   28839 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 23:12:45.387580   28839 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0805 23:12:45.406382   28839 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0805 23:12:45.410465   28839 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 23:12:45.424113   28839 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 23:12:45.551578   28839 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 23:12:45.569837   28839 host.go:66] Checking if "ha-044175" exists ...
	I0805 23:12:45.570306   28839 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:12:45.570365   28839 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:12:45.587123   28839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42951
	I0805 23:12:45.587619   28839 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:12:45.588193   28839 main.go:141] libmachine: Using API Version  1
	I0805 23:12:45.588219   28839 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:12:45.588651   28839 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:12:45.588877   28839 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:12:45.589052   28839 start.go:317] joinCluster: &{Name:ha-044175 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-044175 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.112 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.201 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 23:12:45.589217   28839 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0805 23:12:45.589235   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:12:45.592685   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:12:45.593174   28839 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:12:45.593202   28839 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:12:45.593368   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:12:45.593539   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:12:45.593687   28839 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:12:45.593824   28839 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/id_rsa Username:docker}
	I0805 23:12:45.759535   28839 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.201 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 23:12:45.759586   28839 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lo1e8m.q8g3oakxetdywsfy --discovery-token-ca-cert-hash sha256:80c3f4a7caafd825f47d5f536053424d1d775e8da247cc5594b6b717e711fcd3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-044175-m03 --control-plane --apiserver-advertise-address=192.168.39.201 --apiserver-bind-port=8443"
	I0805 23:13:09.340790   28839 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token lo1e8m.q8g3oakxetdywsfy --discovery-token-ca-cert-hash sha256:80c3f4a7caafd825f47d5f536053424d1d775e8da247cc5594b6b717e711fcd3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-044175-m03 --control-plane --apiserver-advertise-address=192.168.39.201 --apiserver-bind-port=8443": (23.581172917s)
	I0805 23:13:09.340833   28839 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0805 23:13:09.922208   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-044175-m03 minikube.k8s.io/updated_at=2024_08_05T23_13_09_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4 minikube.k8s.io/name=ha-044175 minikube.k8s.io/primary=false
	I0805 23:13:10.105507   28839 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-044175-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0805 23:13:10.255386   28839 start.go:319] duration metric: took 24.666330259s to joinCluster
	I0805 23:13:10.255462   28839 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.201 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0805 23:13:10.255896   28839 config.go:182] Loaded profile config "ha-044175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:13:10.257355   28839 out.go:177] * Verifying Kubernetes components...
	I0805 23:13:10.258896   28839 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 23:13:10.552807   28839 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 23:13:10.578730   28839 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19373-9606/kubeconfig
	I0805 23:13:10.579104   28839 kapi.go:59] client config for ha-044175: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/client.crt", KeyFile:"/home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/client.key", CAFile:"/home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02940), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0805 23:13:10.579208   28839 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.57:8443
	I0805 23:13:10.579501   28839 node_ready.go:35] waiting up to 6m0s for node "ha-044175-m03" to be "Ready" ...
	I0805 23:13:10.579611   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:10.579623   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:10.579635   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:10.579644   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:10.583792   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:13:11.079657   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:11.079680   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:11.079689   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:11.079693   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:11.083673   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:11.579982   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:11.580007   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:11.580020   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:11.580026   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:11.584352   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:13:12.080378   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:12.080406   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:12.080419   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:12.080424   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:12.084588   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:13:12.580345   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:12.580365   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:12.580375   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:12.580382   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:12.584471   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:13:12.585148   28839 node_ready.go:53] node "ha-044175-m03" has status "Ready":"False"
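
The repeated GET requests above are node_ready.go polling the API server roughly every half second until the node reports Ready. A client-go sketch of an equivalent wait follows; the kubeconfig path and the polling interval are assumptions for illustration.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady returns true once the node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig location is an assumption; the log uses minikube's own config.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // same 6m budget as the log
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-044175-m03", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // assumed interval, roughly matching the log
	}
	log.Fatal("timed out waiting for node to become Ready")
}
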
	I0805 23:13:13.080341   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:13.080360   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:13.080370   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:13.080375   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:13.083973   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:13.579798   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:13.579829   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:13.579840   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:13.579847   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:13.583870   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:13:14.080325   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:14.080352   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:14.080363   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:14.080369   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:14.084824   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:13:14.579749   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:14.579772   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:14.579783   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:14.579791   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:14.583727   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:15.080276   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:15.080302   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:15.080312   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:15.080317   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:15.084036   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:15.084802   28839 node_ready.go:53] node "ha-044175-m03" has status "Ready":"False"
	I0805 23:13:15.580083   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:15.580116   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:15.580123   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:15.580128   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:15.584100   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:16.080115   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:16.080141   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:16.080154   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:16.080159   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:16.083872   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:16.580494   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:16.580513   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:16.580521   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:16.580525   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:16.585175   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:13:17.080287   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:17.080310   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:17.080322   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:17.080327   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:17.085307   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:13:17.086077   28839 node_ready.go:53] node "ha-044175-m03" has status "Ready":"False"
	I0805 23:13:17.580296   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:17.580320   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:17.580329   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:17.580333   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:17.584316   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:18.079775   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:18.079799   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:18.079811   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:18.079815   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:18.084252   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:13:18.580643   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:18.580673   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:18.580684   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:18.580689   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:18.584405   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:19.080269   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:19.080290   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:19.080299   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:19.080303   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:19.084831   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:13:19.579856   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:19.579892   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:19.579902   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:19.579907   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:19.583303   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:19.584078   28839 node_ready.go:53] node "ha-044175-m03" has status "Ready":"False"
	I0805 23:13:20.079828   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:20.079864   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:20.079889   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:20.079894   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:20.084108   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:13:20.579920   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:20.579943   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:20.579951   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:20.579957   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:20.583983   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:13:21.080098   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:21.080119   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:21.080127   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:21.080131   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:21.084094   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:21.580237   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:21.580259   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:21.580271   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:21.580277   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:21.583732   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:21.584540   28839 node_ready.go:53] node "ha-044175-m03" has status "Ready":"False"
	I0805 23:13:22.080311   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:22.080333   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:22.080341   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:22.080346   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:22.083883   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:22.579900   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:22.579923   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:22.579932   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:22.579937   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:22.583407   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:23.080584   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:23.080607   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:23.080619   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:23.080626   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:23.084528   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:23.579780   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:23.579799   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:23.579807   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:23.579810   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:23.583231   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:24.080315   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:24.080341   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:24.080352   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:24.080358   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:24.083819   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:24.084684   28839 node_ready.go:53] node "ha-044175-m03" has status "Ready":"False"
	I0805 23:13:24.579982   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:24.580012   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:24.580021   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:24.580024   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:24.583939   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:25.080552   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:25.080578   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:25.080589   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:25.080595   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:25.084110   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:25.580012   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:25.580034   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:25.580042   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:25.580047   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:25.583938   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:26.080137   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:26.080160   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:26.080168   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:26.080172   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:26.083665   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:26.580527   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:26.580549   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:26.580562   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:26.580570   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:26.584322   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:26.584974   28839 node_ready.go:53] node "ha-044175-m03" has status "Ready":"False"
	I0805 23:13:27.080291   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:27.080312   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:27.080320   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:27.080324   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:27.083751   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:27.579892   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:27.579936   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:27.579947   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:27.579955   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:27.583568   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:28.079763   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:28.079787   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:28.079799   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:28.079807   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:28.083156   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:28.580304   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:28.580336   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:28.580347   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:28.580353   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:28.583927   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:29.080135   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:29.080165   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:29.080177   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:29.080181   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:29.083856   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:29.084632   28839 node_ready.go:49] node "ha-044175-m03" has status "Ready":"True"
	I0805 23:13:29.084661   28839 node_ready.go:38] duration metric: took 18.505139296s for node "ha-044175-m03" to be "Ready" ...
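
The loop above is the node_ready wait: minikube re-fetches /api/v1/nodes/ha-044175-m03 roughly every 500 ms until the node's Ready condition turns True, which here took about 18.5 s. For readers who want to reproduce that check outside the test harness, the following is a minimal client-go sketch; the kubeconfig path, the 6-minute deadline, and the file name are illustrative assumptions, not minikube's actual implementation (which lives in node_ready.go).

    // nodeready_sketch.go: poll a node's Ready condition, roughly what the
    // node_ready.go loop above does via repeated GETs (~500 ms apart).
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder kubeconfig path; adjust for your environment.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute) // illustrative timeout
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-044175-m03", metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("node is Ready")
                        return
                    }
                }
            }
            time.Sleep(500 * time.Millisecond) // matches the cadence visible in the log
        }
        fmt.Println("timed out waiting for node Ready")
    }
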
	I0805 23:13:29.084670   28839 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 23:13:29.084724   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods
	I0805 23:13:29.084733   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:29.084740   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:29.084744   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:29.092435   28839 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0805 23:13:29.099556   28839 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-g9bml" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:29.099630   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-g9bml
	I0805 23:13:29.099636   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:29.099643   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:29.099649   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:29.102622   28839 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 23:13:29.103478   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175
	I0805 23:13:29.103497   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:29.103507   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:29.103513   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:29.106161   28839 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 23:13:29.106706   28839 pod_ready.go:92] pod "coredns-7db6d8ff4d-g9bml" in "kube-system" namespace has status "Ready":"True"
	I0805 23:13:29.106722   28839 pod_ready.go:81] duration metric: took 7.143366ms for pod "coredns-7db6d8ff4d-g9bml" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:29.106731   28839 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vzhst" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:29.106779   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-vzhst
	I0805 23:13:29.106786   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:29.106793   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:29.106798   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:29.109584   28839 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 23:13:29.110191   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175
	I0805 23:13:29.110204   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:29.110210   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:29.110214   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:29.112631   28839 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 23:13:29.113209   28839 pod_ready.go:92] pod "coredns-7db6d8ff4d-vzhst" in "kube-system" namespace has status "Ready":"True"
	I0805 23:13:29.113224   28839 pod_ready.go:81] duration metric: took 6.487633ms for pod "coredns-7db6d8ff4d-vzhst" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:29.113232   28839 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-044175" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:29.113318   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/etcd-ha-044175
	I0805 23:13:29.113328   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:29.113334   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:29.113339   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:29.115566   28839 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 23:13:29.116073   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175
	I0805 23:13:29.116091   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:29.116100   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:29.116107   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:29.118160   28839 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 23:13:29.118547   28839 pod_ready.go:92] pod "etcd-ha-044175" in "kube-system" namespace has status "Ready":"True"
	I0805 23:13:29.118562   28839 pod_ready.go:81] duration metric: took 5.324674ms for pod "etcd-ha-044175" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:29.118569   28839 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-044175-m02" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:29.118616   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/etcd-ha-044175-m02
	I0805 23:13:29.118624   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:29.118630   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:29.118635   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:29.120704   28839 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 23:13:29.121217   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:13:29.121229   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:29.121238   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:29.121245   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:29.123792   28839 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 23:13:29.124416   28839 pod_ready.go:92] pod "etcd-ha-044175-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 23:13:29.124436   28839 pod_ready.go:81] duration metric: took 5.859943ms for pod "etcd-ha-044175-m02" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:29.124446   28839 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-044175-m03" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:29.280814   28839 request.go:629] Waited for 156.310918ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/etcd-ha-044175-m03
	I0805 23:13:29.280906   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/etcd-ha-044175-m03
	I0805 23:13:29.280914   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:29.280929   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:29.280937   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:29.284543   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:29.480962   28839 request.go:629] Waited for 195.348486ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:29.481052   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:29.481063   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:29.481073   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:29.481080   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:29.484820   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:29.485975   28839 pod_ready.go:92] pod "etcd-ha-044175-m03" in "kube-system" namespace has status "Ready":"True"
	I0805 23:13:29.485999   28839 pod_ready.go:81] duration metric: took 361.54109ms for pod "etcd-ha-044175-m03" in "kube-system" namespace to be "Ready" ...
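
The "Waited for ... due to client-side throttling, not priority and fairness" entries (request.go:629) come from client-go's client-side rate limiter, not from server-side API Priority and Fairness: when QPS and Burst are left unset on the rest.Config, client-go falls back to a low default (typically around 5 requests/s with a burst of 10), so the back-to-back pod and node GETs in this phase each pick up a ~150-200 ms delay. A minimal sketch of raising those limits on a client follows; the QPS/Burst values and the package and function names are illustrative assumptions, not what minikube configures.

    // Package clientutil: build a clientset with higher client-side rate
    // limits so short bursts of GETs are not delayed by request.go:629 waits.
    package clientutil

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // NewFastClient is a hypothetical helper; the QPS/Burst numbers are
    // illustrative, not a recommendation.
    func NewFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50
        cfg.Burst = 100
        return kubernetes.NewForConfig(cfg)
    }
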
	I0805 23:13:29.486022   28839 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-044175" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:29.680405   28839 request.go:629] Waited for 194.309033ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-044175
	I0805 23:13:29.680483   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-044175
	I0805 23:13:29.680492   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:29.680500   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:29.680504   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:29.683658   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:29.880892   28839 request.go:629] Waited for 196.365769ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes/ha-044175
	I0805 23:13:29.880954   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175
	I0805 23:13:29.880959   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:29.880966   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:29.880970   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:29.884441   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:29.885042   28839 pod_ready.go:92] pod "kube-apiserver-ha-044175" in "kube-system" namespace has status "Ready":"True"
	I0805 23:13:29.885058   28839 pod_ready.go:81] duration metric: took 399.024942ms for pod "kube-apiserver-ha-044175" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:29.885068   28839 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-044175-m02" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:30.080128   28839 request.go:629] Waited for 194.999097ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-044175-m02
	I0805 23:13:30.080227   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-044175-m02
	I0805 23:13:30.080238   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:30.080250   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:30.080257   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:30.083834   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:30.281049   28839 request.go:629] Waited for 196.344278ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:13:30.281138   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:13:30.281144   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:30.281152   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:30.281158   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:30.284838   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:30.285672   28839 pod_ready.go:92] pod "kube-apiserver-ha-044175-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 23:13:30.285699   28839 pod_ready.go:81] duration metric: took 400.624511ms for pod "kube-apiserver-ha-044175-m02" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:30.285730   28839 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-044175-m03" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:30.480762   28839 request.go:629] Waited for 194.951381ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-044175-m03
	I0805 23:13:30.480873   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-044175-m03
	I0805 23:13:30.480888   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:30.480898   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:30.480904   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:30.484624   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:30.680546   28839 request.go:629] Waited for 195.355261ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:30.680624   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:30.680635   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:30.680649   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:30.680658   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:30.684292   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:30.685008   28839 pod_ready.go:92] pod "kube-apiserver-ha-044175-m03" in "kube-system" namespace has status "Ready":"True"
	I0805 23:13:30.685029   28839 pod_ready.go:81] duration metric: took 399.28781ms for pod "kube-apiserver-ha-044175-m03" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:30.685040   28839 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-044175" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:30.880841   28839 request.go:629] Waited for 195.731489ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-044175
	I0805 23:13:30.880902   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-044175
	I0805 23:13:30.880907   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:30.880914   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:30.880918   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:30.884894   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:31.080949   28839 request.go:629] Waited for 195.363946ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes/ha-044175
	I0805 23:13:31.081024   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175
	I0805 23:13:31.081029   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:31.081036   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:31.081042   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:31.084929   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:31.086329   28839 pod_ready.go:92] pod "kube-controller-manager-ha-044175" in "kube-system" namespace has status "Ready":"True"
	I0805 23:13:31.086354   28839 pod_ready.go:81] duration metric: took 401.306409ms for pod "kube-controller-manager-ha-044175" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:31.086365   28839 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-044175-m02" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:31.280695   28839 request.go:629] Waited for 194.261394ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-044175-m02
	I0805 23:13:31.280765   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-044175-m02
	I0805 23:13:31.280773   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:31.280783   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:31.280789   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:31.284270   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:31.480709   28839 request.go:629] Waited for 195.366262ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:13:31.480767   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:13:31.480783   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:31.480791   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:31.480799   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:31.484380   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:31.484963   28839 pod_ready.go:92] pod "kube-controller-manager-ha-044175-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 23:13:31.484983   28839 pod_ready.go:81] duration metric: took 398.611698ms for pod "kube-controller-manager-ha-044175-m02" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:31.484996   28839 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-044175-m03" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:31.680790   28839 request.go:629] Waited for 195.72273ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-044175-m03
	I0805 23:13:31.680880   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-044175-m03
	I0805 23:13:31.680888   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:31.680896   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:31.680900   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:31.684619   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:31.881003   28839 request.go:629] Waited for 195.355315ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:31.881055   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:31.881060   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:31.881070   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:31.881076   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:31.884305   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:31.884834   28839 pod_ready.go:92] pod "kube-controller-manager-ha-044175-m03" in "kube-system" namespace has status "Ready":"True"
	I0805 23:13:31.884856   28839 pod_ready.go:81] duration metric: took 399.851377ms for pod "kube-controller-manager-ha-044175-m03" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:31.884869   28839 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4ql5l" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:32.081052   28839 request.go:629] Waited for 196.099083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4ql5l
	I0805 23:13:32.081124   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4ql5l
	I0805 23:13:32.081133   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:32.081143   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:32.081152   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:32.084619   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:32.280904   28839 request.go:629] Waited for 195.368598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:32.280957   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:32.280967   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:32.280986   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:32.280993   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:32.284909   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:32.285660   28839 pod_ready.go:92] pod "kube-proxy-4ql5l" in "kube-system" namespace has status "Ready":"True"
	I0805 23:13:32.285682   28839 pod_ready.go:81] duration metric: took 400.797372ms for pod "kube-proxy-4ql5l" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:32.285696   28839 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jfs9q" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:32.480848   28839 request.go:629] Waited for 195.083319ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jfs9q
	I0805 23:13:32.480926   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jfs9q
	I0805 23:13:32.480935   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:32.480944   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:32.481009   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:32.484539   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:32.680586   28839 request.go:629] Waited for 195.338964ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:13:32.680659   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:13:32.680667   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:32.680678   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:32.680683   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:32.684223   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:32.684937   28839 pod_ready.go:92] pod "kube-proxy-jfs9q" in "kube-system" namespace has status "Ready":"True"
	I0805 23:13:32.684957   28839 pod_ready.go:81] duration metric: took 399.252196ms for pod "kube-proxy-jfs9q" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:32.684972   28839 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vj5sd" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:32.881019   28839 request.go:629] Waited for 195.972084ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vj5sd
	I0805 23:13:32.881108   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vj5sd
	I0805 23:13:32.881119   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:32.881130   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:32.881140   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:32.884904   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:33.080296   28839 request.go:629] Waited for 194.285753ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes/ha-044175
	I0805 23:13:33.080383   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175
	I0805 23:13:33.080389   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:33.080397   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:33.080404   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:33.083997   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:33.084894   28839 pod_ready.go:92] pod "kube-proxy-vj5sd" in "kube-system" namespace has status "Ready":"True"
	I0805 23:13:33.084914   28839 pod_ready.go:81] duration metric: took 399.929086ms for pod "kube-proxy-vj5sd" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:33.084923   28839 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-044175" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:33.281061   28839 request.go:629] Waited for 196.079497ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-044175
	I0805 23:13:33.281153   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-044175
	I0805 23:13:33.281161   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:33.281170   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:33.281175   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:33.284680   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:33.480553   28839 request.go:629] Waited for 195.084005ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes/ha-044175
	I0805 23:13:33.480611   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175
	I0805 23:13:33.480616   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:33.480624   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:33.480628   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:33.484136   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:33.484921   28839 pod_ready.go:92] pod "kube-scheduler-ha-044175" in "kube-system" namespace has status "Ready":"True"
	I0805 23:13:33.484940   28839 pod_ready.go:81] duration metric: took 400.010367ms for pod "kube-scheduler-ha-044175" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:33.484952   28839 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-044175-m02" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:33.681072   28839 request.go:629] Waited for 196.05614ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-044175-m02
	I0805 23:13:33.681148   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-044175-m02
	I0805 23:13:33.681155   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:33.681166   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:33.681173   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:33.684559   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:33.880566   28839 request.go:629] Waited for 195.388243ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:13:33.880634   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m02
	I0805 23:13:33.880641   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:33.880649   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:33.880658   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:33.885130   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:13:33.886696   28839 pod_ready.go:92] pod "kube-scheduler-ha-044175-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 23:13:33.886723   28839 pod_ready.go:81] duration metric: took 401.762075ms for pod "kube-scheduler-ha-044175-m02" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:33.886737   28839 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-044175-m03" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:34.080694   28839 request.go:629] Waited for 193.885489ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-044175-m03
	I0805 23:13:34.080770   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-044175-m03
	I0805 23:13:34.080778   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:34.080786   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:34.080790   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:34.084489   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:34.280530   28839 request.go:629] Waited for 195.363035ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:34.280583   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes/ha-044175-m03
	I0805 23:13:34.280587   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:34.280595   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:34.280603   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:34.284457   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:34.285127   28839 pod_ready.go:92] pod "kube-scheduler-ha-044175-m03" in "kube-system" namespace has status "Ready":"True"
	I0805 23:13:34.285145   28839 pod_ready.go:81] duration metric: took 398.400816ms for pod "kube-scheduler-ha-044175-m03" in "kube-system" namespace to be "Ready" ...
	I0805 23:13:34.285156   28839 pod_ready.go:38] duration metric: took 5.200477021s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 23:13:34.285170   28839 api_server.go:52] waiting for apiserver process to appear ...
	I0805 23:13:34.285218   28839 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 23:13:34.301918   28839 api_server.go:72] duration metric: took 24.046418005s to wait for apiserver process to appear ...
	I0805 23:13:34.301950   28839 api_server.go:88] waiting for apiserver healthz status ...
	I0805 23:13:34.301973   28839 api_server.go:253] Checking apiserver healthz at https://192.168.39.57:8443/healthz ...
	I0805 23:13:34.309670   28839 api_server.go:279] https://192.168.39.57:8443/healthz returned 200:
	ok
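
The api_server.go wait above polls the apiserver's /healthz endpoint until it answers HTTP 200 with body "ok". A standard-library sketch of a similar probe follows; it skips TLS verification purely to keep the example short (an assumption for illustration; a real check should trust the cluster CA), and the endpoint address is the one shown in the log.

    // healthz_sketch.go: probe https://192.168.39.57:8443/healthz and expect
    // "200 ok", mirroring the healthz wait logged above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // InsecureSkipVerify only to keep the sketch self-contained.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.39.57:8443/healthz")
        if err != nil {
            fmt.Println("healthz probe failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body)) // expect 200 ok
    }
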
	I0805 23:13:34.309729   28839 round_trippers.go:463] GET https://192.168.39.57:8443/version
	I0805 23:13:34.309736   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:34.309744   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:34.309752   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:34.310981   28839 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 23:13:34.311038   28839 api_server.go:141] control plane version: v1.30.3
	I0805 23:13:34.311074   28839 api_server.go:131] duration metric: took 9.116905ms to wait for apiserver health ...
	I0805 23:13:34.311088   28839 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 23:13:34.480516   28839 request.go:629] Waited for 169.354206ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods
	I0805 23:13:34.480585   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods
	I0805 23:13:34.480593   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:34.480603   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:34.480614   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:34.488406   28839 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0805 23:13:34.494649   28839 system_pods.go:59] 24 kube-system pods found
	I0805 23:13:34.494678   28839 system_pods.go:61] "coredns-7db6d8ff4d-g9bml" [fd474413-e416-48db-a7bf-f3c40675819b] Running
	I0805 23:13:34.494682   28839 system_pods.go:61] "coredns-7db6d8ff4d-vzhst" [f9c09745-be29-4403-9e7d-f9e4eaae5cac] Running
	I0805 23:13:34.494688   28839 system_pods.go:61] "etcd-ha-044175" [f9008d52-5a0c-4a6b-9cdf-7df18dd78752] Running
	I0805 23:13:34.494692   28839 system_pods.go:61] "etcd-ha-044175-m02" [773f42be-f8b5-47f0-bcd0-36bd6ae24bab] Running
	I0805 23:13:34.494695   28839 system_pods.go:61] "etcd-ha-044175-m03" [5704b0d2-6558-4321-9443-e4c7827bbd39] Running
	I0805 23:13:34.494698   28839 system_pods.go:61] "kindnet-hqhgc" [de6b28dc-79ea-43af-868e-e32180dcd5f2] Running
	I0805 23:13:34.494701   28839 system_pods.go:61] "kindnet-mc7wf" [c0635f1a-e26d-47b6-98f3-675d6e0b8acc] Running
	I0805 23:13:34.494705   28839 system_pods.go:61] "kindnet-xqx4z" [8455705e-b140-4f1e-abff-6a71bbb5415f] Running
	I0805 23:13:34.494708   28839 system_pods.go:61] "kube-apiserver-ha-044175" [4e39654d-531d-4cf4-b4a9-beeada8e8d05] Running
	I0805 23:13:34.494711   28839 system_pods.go:61] "kube-apiserver-ha-044175-m02" [06dfad00-f627-43cd-abea-c3a34d423964] Running
	I0805 23:13:34.494714   28839 system_pods.go:61] "kube-apiserver-ha-044175-m03" [d448c79d-6668-4d54-9814-2dac3eb5162d] Running
	I0805 23:13:34.494717   28839 system_pods.go:61] "kube-controller-manager-ha-044175" [d6f6d163-103f-4af4-976f-c255d1933bb2] Running
	I0805 23:13:34.494720   28839 system_pods.go:61] "kube-controller-manager-ha-044175-m02" [1bf050d3-1969-4ca1-89d3-f729989fd6b8] Running
	I0805 23:13:34.494723   28839 system_pods.go:61] "kube-controller-manager-ha-044175-m03" [ad0efa73-21d4-43e6-b1bd-9320ffd77f38] Running
	I0805 23:13:34.494726   28839 system_pods.go:61] "kube-proxy-4ql5l" [cf451989-77fc-462d-9826-54eeca4047e8] Running
	I0805 23:13:34.494729   28839 system_pods.go:61] "kube-proxy-jfs9q" [d8d0b4df-e1e1-4354-ba55-594dec7d1e89] Running
	I0805 23:13:34.494732   28839 system_pods.go:61] "kube-proxy-vj5sd" [d6c9cdcb-e1b7-44c8-a6e3-5e5aeb76ba03] Running
	I0805 23:13:34.494740   28839 system_pods.go:61] "kube-scheduler-ha-044175" [41c96a32-1b26-4e05-a21a-48c4fd913b9f] Running
	I0805 23:13:34.494742   28839 system_pods.go:61] "kube-scheduler-ha-044175-m02" [8e41f86c-0b86-40be-a524-fbae6283693d] Running
	I0805 23:13:34.494745   28839 system_pods.go:61] "kube-scheduler-ha-044175-m03" [e9faa567-8329-4fc5-a135-2851a03672a6] Running
	I0805 23:13:34.494748   28839 system_pods.go:61] "kube-vip-ha-044175" [505ff885-b8a0-48bd-8d1e-81e4583b48af] Running
	I0805 23:13:34.494753   28839 system_pods.go:61] "kube-vip-ha-044175-m02" [ffbecaef-6482-4c4e-8268-4b66e4799be5] Running
	I0805 23:13:34.494756   28839 system_pods.go:61] "kube-vip-ha-044175-m03" [6defc4ea-8441-46e2-ac1a-0ab55290431c] Running
	I0805 23:13:34.494758   28839 system_pods.go:61] "storage-provisioner" [d30d1a5b-cfbe-4de6-a964-75c32e5dbf62] Running
	I0805 23:13:34.494764   28839 system_pods.go:74] duration metric: took 183.668198ms to wait for pod list to return data ...
	I0805 23:13:34.494774   28839 default_sa.go:34] waiting for default service account to be created ...
	I0805 23:13:34.680796   28839 request.go:629] Waited for 185.959448ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/default/serviceaccounts
	I0805 23:13:34.680853   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/default/serviceaccounts
	I0805 23:13:34.680858   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:34.680865   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:34.680868   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:34.684549   28839 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 23:13:34.684672   28839 default_sa.go:45] found service account: "default"
	I0805 23:13:34.684685   28839 default_sa.go:55] duration metric: took 189.905927ms for default service account to be created ...
	I0805 23:13:34.684694   28839 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 23:13:34.881112   28839 request.go:629] Waited for 196.358612ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods
	I0805 23:13:34.881179   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/namespaces/kube-system/pods
	I0805 23:13:34.881186   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:34.881196   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:34.881202   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:34.888776   28839 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0805 23:13:34.895122   28839 system_pods.go:86] 24 kube-system pods found
	I0805 23:13:34.895149   28839 system_pods.go:89] "coredns-7db6d8ff4d-g9bml" [fd474413-e416-48db-a7bf-f3c40675819b] Running
	I0805 23:13:34.895155   28839 system_pods.go:89] "coredns-7db6d8ff4d-vzhst" [f9c09745-be29-4403-9e7d-f9e4eaae5cac] Running
	I0805 23:13:34.895159   28839 system_pods.go:89] "etcd-ha-044175" [f9008d52-5a0c-4a6b-9cdf-7df18dd78752] Running
	I0805 23:13:34.895163   28839 system_pods.go:89] "etcd-ha-044175-m02" [773f42be-f8b5-47f0-bcd0-36bd6ae24bab] Running
	I0805 23:13:34.895167   28839 system_pods.go:89] "etcd-ha-044175-m03" [5704b0d2-6558-4321-9443-e4c7827bbd39] Running
	I0805 23:13:34.895171   28839 system_pods.go:89] "kindnet-hqhgc" [de6b28dc-79ea-43af-868e-e32180dcd5f2] Running
	I0805 23:13:34.895175   28839 system_pods.go:89] "kindnet-mc7wf" [c0635f1a-e26d-47b6-98f3-675d6e0b8acc] Running
	I0805 23:13:34.895179   28839 system_pods.go:89] "kindnet-xqx4z" [8455705e-b140-4f1e-abff-6a71bbb5415f] Running
	I0805 23:13:34.895183   28839 system_pods.go:89] "kube-apiserver-ha-044175" [4e39654d-531d-4cf4-b4a9-beeada8e8d05] Running
	I0805 23:13:34.895188   28839 system_pods.go:89] "kube-apiserver-ha-044175-m02" [06dfad00-f627-43cd-abea-c3a34d423964] Running
	I0805 23:13:34.895192   28839 system_pods.go:89] "kube-apiserver-ha-044175-m03" [d448c79d-6668-4d54-9814-2dac3eb5162d] Running
	I0805 23:13:34.895196   28839 system_pods.go:89] "kube-controller-manager-ha-044175" [d6f6d163-103f-4af4-976f-c255d1933bb2] Running
	I0805 23:13:34.895200   28839 system_pods.go:89] "kube-controller-manager-ha-044175-m02" [1bf050d3-1969-4ca1-89d3-f729989fd6b8] Running
	I0805 23:13:34.895204   28839 system_pods.go:89] "kube-controller-manager-ha-044175-m03" [ad0efa73-21d4-43e6-b1bd-9320ffd77f38] Running
	I0805 23:13:34.895209   28839 system_pods.go:89] "kube-proxy-4ql5l" [cf451989-77fc-462d-9826-54eeca4047e8] Running
	I0805 23:13:34.895213   28839 system_pods.go:89] "kube-proxy-jfs9q" [d8d0b4df-e1e1-4354-ba55-594dec7d1e89] Running
	I0805 23:13:34.895218   28839 system_pods.go:89] "kube-proxy-vj5sd" [d6c9cdcb-e1b7-44c8-a6e3-5e5aeb76ba03] Running
	I0805 23:13:34.895222   28839 system_pods.go:89] "kube-scheduler-ha-044175" [41c96a32-1b26-4e05-a21a-48c4fd913b9f] Running
	I0805 23:13:34.895228   28839 system_pods.go:89] "kube-scheduler-ha-044175-m02" [8e41f86c-0b86-40be-a524-fbae6283693d] Running
	I0805 23:13:34.895231   28839 system_pods.go:89] "kube-scheduler-ha-044175-m03" [e9faa567-8329-4fc5-a135-2851a03672a6] Running
	I0805 23:13:34.895237   28839 system_pods.go:89] "kube-vip-ha-044175" [505ff885-b8a0-48bd-8d1e-81e4583b48af] Running
	I0805 23:13:34.895241   28839 system_pods.go:89] "kube-vip-ha-044175-m02" [ffbecaef-6482-4c4e-8268-4b66e4799be5] Running
	I0805 23:13:34.895247   28839 system_pods.go:89] "kube-vip-ha-044175-m03" [6defc4ea-8441-46e2-ac1a-0ab55290431c] Running
	I0805 23:13:34.895250   28839 system_pods.go:89] "storage-provisioner" [d30d1a5b-cfbe-4de6-a964-75c32e5dbf62] Running
	I0805 23:13:34.895256   28839 system_pods.go:126] duration metric: took 210.557395ms to wait for k8s-apps to be running ...
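
The system_pods step above (system_pods.go:116) lists everything in kube-system and treats the k8s-apps check as done once all 24 pods report as Running. A minimal client-go sketch of an equivalent listing is below; the kubeconfig path is a placeholder, and checking only Pod.Status.Phase is a simplification of whatever additional conditions the test harness applies.

    // systempods_sketch.go: list kube-system pods and flag any that are not
    // in the Running phase.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            if p.Status.Phase != corev1.PodRunning {
                fmt.Printf("not running: %s (%s)\n", p.Name, p.Status.Phase)
            }
        }
    }
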
	I0805 23:13:34.895264   28839 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 23:13:34.895308   28839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 23:13:34.911311   28839 system_svc.go:56] duration metric: took 16.041336ms WaitForService to wait for kubelet
	I0805 23:13:34.911336   28839 kubeadm.go:582] duration metric: took 24.655841277s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 23:13:34.911355   28839 node_conditions.go:102] verifying NodePressure condition ...
	I0805 23:13:35.080788   28839 request.go:629] Waited for 169.357422ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.57:8443/api/v1/nodes
	I0805 23:13:35.080855   28839 round_trippers.go:463] GET https://192.168.39.57:8443/api/v1/nodes
	I0805 23:13:35.080893   28839 round_trippers.go:469] Request Headers:
	I0805 23:13:35.080916   28839 round_trippers.go:473]     Accept: application/json, */*
	I0805 23:13:35.080929   28839 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0805 23:13:35.084961   28839 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 23:13:35.086423   28839 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 23:13:35.086445   28839 node_conditions.go:123] node cpu capacity is 2
	I0805 23:13:35.086461   28839 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 23:13:35.086467   28839 node_conditions.go:123] node cpu capacity is 2
	I0805 23:13:35.086474   28839 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 23:13:35.086481   28839 node_conditions.go:123] node cpu capacity is 2
	I0805 23:13:35.086488   28839 node_conditions.go:105] duration metric: took 175.127143ms to run NodePressure ...
	I0805 23:13:35.086506   28839 start.go:241] waiting for startup goroutines ...
	I0805 23:13:35.086533   28839 start.go:255] writing updated cluster config ...
	I0805 23:13:35.086868   28839 ssh_runner.go:195] Run: rm -f paused
	I0805 23:13:35.138880   28839 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0805 23:13:35.140884   28839 out.go:177] * Done! kubectl is now configured to use "ha-044175" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 05 23:18:07 ha-044175 crio[684]: time="2024-08-05 23:18:07.247504479Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722899887247481044,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4163bfa1-b6a2-4a80-8fd2-c1a6591f7adf name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:18:07 ha-044175 crio[684]: time="2024-08-05 23:18:07.248099197Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d70b3491-430d-4951-8498-e6580e3db990 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:18:07 ha-044175 crio[684]: time="2024-08-05 23:18:07.248167920Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d70b3491-430d-4951-8498-e6580e3db990 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:18:07 ha-044175 crio[684]: time="2024-08-05 23:18:07.248444876Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:14f7140ac408890dd788c7a9d6a9857531edad86ff751157ac035e6ab0d4afdc,PodSandboxId:1bf94d816bd6b0f9325f20c0b2453330291a5dfa79448419ddd925a97f951bb9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722899618925179407,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wmfql,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfc8bad7-d43d-4beb-991e-339a4ce96ab5,},Annotations:map[string]string{io.kubernetes.container.hash: fc00d50e,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8f17a7a758ce7d69c780273e3653b03bc4c01767911d236cad9862a3337e50,PodSandboxId:5d4208cbe441324fb59633dbd487e1e04ee180f1f9763a207a4979e68a4ab71e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722899473852759909,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d30d1a5b-cfbe-4de6-a964-75c32e5dbf62,},Annotations:map[string]string{io.kubernetes.container.hash: 4378961a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4617bbebfc992da16ee550b4c2c74a6d4c58299fe2518f6d24c3a10b1e02c941,PodSandboxId:449b4adbddbde16b1d8ca1645ef0b728416e504b57b2e560589ffd060ad34e4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722899473857623130,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g9bml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd474413-e416-48db-a7bf-f3c40675819b,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd67db4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e65205c398221a15eecea1ec1092d54f364a44886b05149400c7be5ffafc3285,PodSandboxId:0df1c00cbbb9d6891997d631537dd7662e552d8dca3cea20f0b653ed34f6f7bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722899473821870209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vzhst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9c09745-be
29-4403-9e7d-f9e4eaae5cac,},Annotations:map[string]string{io.kubernetes.container.hash: 1a8c310a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97fa319bea82614cab7525f9052bcc8a09fad765b260045dbf0d0fa0ca0290b2,PodSandboxId:4f369251bc6de76b6eba2d8a6404cb53a6bcba17f58bd09854de9edd65d080fa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CO
NTAINER_RUNNING,CreatedAt:1722899461696934419,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xqx4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8455705e-b140-4f1e-abff-6a71bbb5415f,},Annotations:map[string]string{io.kubernetes.container.hash: 4a9283b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04c382fd4a32fe8685a6f643ecf7a291e4d542c2223975f9df92991fe566b12a,PodSandboxId:b7b77d3f5c8a24f9906eb41c479b7254cd21f7c4d0c34b7014bdfa5f666df829,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172289945
7757340037,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vj5sd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c9cdcb-e1b7-44c8-a6e3-5e5aeb76ba03,},Annotations:map[string]string{io.kubernetes.container.hash: a40979c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40fc9655d4bc3a83cded30a0628a93c01856e1db81e027d8d131004479df9ed3,PodSandboxId:8ece168043c14c199a06a5ef7db680c0d579fe87db735e94a6522f616365372e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17228994417
23968430,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26033e5e6fae3c18f82268d3b219e4ab,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c90a080943378c8bb82560d92b4399ff4ea03ab68d06f0de21852e1df609090,PodSandboxId:f0615d6a6ed3b0a919333497ebf049ca31c007ff3340b12a0a3b89c149d2558f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722899438261300658,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5280d6dbae40883a34349dd31a13a779,},Annotations:map[string]string{io.kubernetes.container.hash: bd2d1b8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0893967672c7dc591bbcf220e56601b8a46fc11f07e63adbadaddec59ec1803,PodSandboxId:c7f5da3aca5fb3bac198b9144677aac33c3f5317946dad29f46e726a35d2c596,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722899438287785506,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47fd3d59fe4024c671f4b57dbae12a83,},Annotations:map[string]string{io.kubernetes.container.hash: fa9a7bc3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a85f2254a23cdec7e89ff8de2e31b06ddf2853808330965760217f1fd834004,PodSandboxId:57dd6eb50740256e4db3c59d0c1d850b0ba784d01abbeb7f8ea139160576fc43,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722899438266855231,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: ku
be-scheduler-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87091e6c521c934e57911d0cd84fc454,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52e65ab51d03f5a6abf04b86a788a251259de2c7971b7f676c0b5c5eb33e5849,PodSandboxId:41084305e84434e5136bb133632d08d27b3092395382f9508528787851465c5c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722899438199945652,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de889c914a63f88b5552d92d7c04005b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d70b3491-430d-4951-8498-e6580e3db990 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:18:07 ha-044175 crio[684]: time="2024-08-05 23:18:07.298868424Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=145354f0-f86f-46aa-8418-bc61fb8e7ffc name=/runtime.v1.RuntimeService/Version
	Aug 05 23:18:07 ha-044175 crio[684]: time="2024-08-05 23:18:07.298962645Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=145354f0-f86f-46aa-8418-bc61fb8e7ffc name=/runtime.v1.RuntimeService/Version
	Aug 05 23:18:07 ha-044175 crio[684]: time="2024-08-05 23:18:07.299965605Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7200d2bf-597b-4fe1-bd77-0fee62eaa552 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:18:07 ha-044175 crio[684]: time="2024-08-05 23:18:07.300479919Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722899887300455612,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7200d2bf-597b-4fe1-bd77-0fee62eaa552 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:18:07 ha-044175 crio[684]: time="2024-08-05 23:18:07.301283356Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8c6b681c-af54-4a5a-949f-246847a137a5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:18:07 ha-044175 crio[684]: time="2024-08-05 23:18:07.301355558Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8c6b681c-af54-4a5a-949f-246847a137a5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:18:07 ha-044175 crio[684]: time="2024-08-05 23:18:07.301633341Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:14f7140ac408890dd788c7a9d6a9857531edad86ff751157ac035e6ab0d4afdc,PodSandboxId:1bf94d816bd6b0f9325f20c0b2453330291a5dfa79448419ddd925a97f951bb9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722899618925179407,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wmfql,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfc8bad7-d43d-4beb-991e-339a4ce96ab5,},Annotations:map[string]string{io.kubernetes.container.hash: fc00d50e,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8f17a7a758ce7d69c780273e3653b03bc4c01767911d236cad9862a3337e50,PodSandboxId:5d4208cbe441324fb59633dbd487e1e04ee180f1f9763a207a4979e68a4ab71e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722899473852759909,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d30d1a5b-cfbe-4de6-a964-75c32e5dbf62,},Annotations:map[string]string{io.kubernetes.container.hash: 4378961a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4617bbebfc992da16ee550b4c2c74a6d4c58299fe2518f6d24c3a10b1e02c941,PodSandboxId:449b4adbddbde16b1d8ca1645ef0b728416e504b57b2e560589ffd060ad34e4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722899473857623130,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g9bml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd474413-e416-48db-a7bf-f3c40675819b,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd67db4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e65205c398221a15eecea1ec1092d54f364a44886b05149400c7be5ffafc3285,PodSandboxId:0df1c00cbbb9d6891997d631537dd7662e552d8dca3cea20f0b653ed34f6f7bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722899473821870209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vzhst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9c09745-be
29-4403-9e7d-f9e4eaae5cac,},Annotations:map[string]string{io.kubernetes.container.hash: 1a8c310a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97fa319bea82614cab7525f9052bcc8a09fad765b260045dbf0d0fa0ca0290b2,PodSandboxId:4f369251bc6de76b6eba2d8a6404cb53a6bcba17f58bd09854de9edd65d080fa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CO
NTAINER_RUNNING,CreatedAt:1722899461696934419,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xqx4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8455705e-b140-4f1e-abff-6a71bbb5415f,},Annotations:map[string]string{io.kubernetes.container.hash: 4a9283b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04c382fd4a32fe8685a6f643ecf7a291e4d542c2223975f9df92991fe566b12a,PodSandboxId:b7b77d3f5c8a24f9906eb41c479b7254cd21f7c4d0c34b7014bdfa5f666df829,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172289945
7757340037,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vj5sd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c9cdcb-e1b7-44c8-a6e3-5e5aeb76ba03,},Annotations:map[string]string{io.kubernetes.container.hash: a40979c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40fc9655d4bc3a83cded30a0628a93c01856e1db81e027d8d131004479df9ed3,PodSandboxId:8ece168043c14c199a06a5ef7db680c0d579fe87db735e94a6522f616365372e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17228994417
23968430,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26033e5e6fae3c18f82268d3b219e4ab,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c90a080943378c8bb82560d92b4399ff4ea03ab68d06f0de21852e1df609090,PodSandboxId:f0615d6a6ed3b0a919333497ebf049ca31c007ff3340b12a0a3b89c149d2558f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722899438261300658,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5280d6dbae40883a34349dd31a13a779,},Annotations:map[string]string{io.kubernetes.container.hash: bd2d1b8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0893967672c7dc591bbcf220e56601b8a46fc11f07e63adbadaddec59ec1803,PodSandboxId:c7f5da3aca5fb3bac198b9144677aac33c3f5317946dad29f46e726a35d2c596,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722899438287785506,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47fd3d59fe4024c671f4b57dbae12a83,},Annotations:map[string]string{io.kubernetes.container.hash: fa9a7bc3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a85f2254a23cdec7e89ff8de2e31b06ddf2853808330965760217f1fd834004,PodSandboxId:57dd6eb50740256e4db3c59d0c1d850b0ba784d01abbeb7f8ea139160576fc43,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722899438266855231,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: ku
be-scheduler-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87091e6c521c934e57911d0cd84fc454,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52e65ab51d03f5a6abf04b86a788a251259de2c7971b7f676c0b5c5eb33e5849,PodSandboxId:41084305e84434e5136bb133632d08d27b3092395382f9508528787851465c5c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722899438199945652,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de889c914a63f88b5552d92d7c04005b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8c6b681c-af54-4a5a-949f-246847a137a5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:18:07 ha-044175 crio[684]: time="2024-08-05 23:18:07.342900123Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9eb61d51-b93c-4030-b13b-ad4f87938254 name=/runtime.v1.RuntimeService/Version
	Aug 05 23:18:07 ha-044175 crio[684]: time="2024-08-05 23:18:07.342992770Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9eb61d51-b93c-4030-b13b-ad4f87938254 name=/runtime.v1.RuntimeService/Version
	Aug 05 23:18:07 ha-044175 crio[684]: time="2024-08-05 23:18:07.343937658Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9589a65a-4f58-4101-b964-ba79ce23cdaf name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:18:07 ha-044175 crio[684]: time="2024-08-05 23:18:07.344770556Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722899887344743708,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9589a65a-4f58-4101-b964-ba79ce23cdaf name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:18:07 ha-044175 crio[684]: time="2024-08-05 23:18:07.345340410Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1a689208-e055-4a3e-bae7-85c65d2458f6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:18:07 ha-044175 crio[684]: time="2024-08-05 23:18:07.345509326Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1a689208-e055-4a3e-bae7-85c65d2458f6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:18:07 ha-044175 crio[684]: time="2024-08-05 23:18:07.345740960Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:14f7140ac408890dd788c7a9d6a9857531edad86ff751157ac035e6ab0d4afdc,PodSandboxId:1bf94d816bd6b0f9325f20c0b2453330291a5dfa79448419ddd925a97f951bb9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722899618925179407,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wmfql,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfc8bad7-d43d-4beb-991e-339a4ce96ab5,},Annotations:map[string]string{io.kubernetes.container.hash: fc00d50e,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8f17a7a758ce7d69c780273e3653b03bc4c01767911d236cad9862a3337e50,PodSandboxId:5d4208cbe441324fb59633dbd487e1e04ee180f1f9763a207a4979e68a4ab71e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722899473852759909,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d30d1a5b-cfbe-4de6-a964-75c32e5dbf62,},Annotations:map[string]string{io.kubernetes.container.hash: 4378961a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4617bbebfc992da16ee550b4c2c74a6d4c58299fe2518f6d24c3a10b1e02c941,PodSandboxId:449b4adbddbde16b1d8ca1645ef0b728416e504b57b2e560589ffd060ad34e4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722899473857623130,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g9bml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd474413-e416-48db-a7bf-f3c40675819b,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd67db4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e65205c398221a15eecea1ec1092d54f364a44886b05149400c7be5ffafc3285,PodSandboxId:0df1c00cbbb9d6891997d631537dd7662e552d8dca3cea20f0b653ed34f6f7bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722899473821870209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vzhst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9c09745-be
29-4403-9e7d-f9e4eaae5cac,},Annotations:map[string]string{io.kubernetes.container.hash: 1a8c310a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97fa319bea82614cab7525f9052bcc8a09fad765b260045dbf0d0fa0ca0290b2,PodSandboxId:4f369251bc6de76b6eba2d8a6404cb53a6bcba17f58bd09854de9edd65d080fa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CO
NTAINER_RUNNING,CreatedAt:1722899461696934419,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xqx4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8455705e-b140-4f1e-abff-6a71bbb5415f,},Annotations:map[string]string{io.kubernetes.container.hash: 4a9283b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04c382fd4a32fe8685a6f643ecf7a291e4d542c2223975f9df92991fe566b12a,PodSandboxId:b7b77d3f5c8a24f9906eb41c479b7254cd21f7c4d0c34b7014bdfa5f666df829,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172289945
7757340037,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vj5sd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c9cdcb-e1b7-44c8-a6e3-5e5aeb76ba03,},Annotations:map[string]string{io.kubernetes.container.hash: a40979c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40fc9655d4bc3a83cded30a0628a93c01856e1db81e027d8d131004479df9ed3,PodSandboxId:8ece168043c14c199a06a5ef7db680c0d579fe87db735e94a6522f616365372e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17228994417
23968430,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26033e5e6fae3c18f82268d3b219e4ab,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c90a080943378c8bb82560d92b4399ff4ea03ab68d06f0de21852e1df609090,PodSandboxId:f0615d6a6ed3b0a919333497ebf049ca31c007ff3340b12a0a3b89c149d2558f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722899438261300658,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5280d6dbae40883a34349dd31a13a779,},Annotations:map[string]string{io.kubernetes.container.hash: bd2d1b8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0893967672c7dc591bbcf220e56601b8a46fc11f07e63adbadaddec59ec1803,PodSandboxId:c7f5da3aca5fb3bac198b9144677aac33c3f5317946dad29f46e726a35d2c596,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722899438287785506,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47fd3d59fe4024c671f4b57dbae12a83,},Annotations:map[string]string{io.kubernetes.container.hash: fa9a7bc3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a85f2254a23cdec7e89ff8de2e31b06ddf2853808330965760217f1fd834004,PodSandboxId:57dd6eb50740256e4db3c59d0c1d850b0ba784d01abbeb7f8ea139160576fc43,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722899438266855231,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: ku
be-scheduler-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87091e6c521c934e57911d0cd84fc454,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52e65ab51d03f5a6abf04b86a788a251259de2c7971b7f676c0b5c5eb33e5849,PodSandboxId:41084305e84434e5136bb133632d08d27b3092395382f9508528787851465c5c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722899438199945652,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de889c914a63f88b5552d92d7c04005b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1a689208-e055-4a3e-bae7-85c65d2458f6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:18:07 ha-044175 crio[684]: time="2024-08-05 23:18:07.387468005Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=06d2a757-aae6-4d7d-b45f-ae2c611997c0 name=/runtime.v1.RuntimeService/Version
	Aug 05 23:18:07 ha-044175 crio[684]: time="2024-08-05 23:18:07.387554848Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=06d2a757-aae6-4d7d-b45f-ae2c611997c0 name=/runtime.v1.RuntimeService/Version
	Aug 05 23:18:07 ha-044175 crio[684]: time="2024-08-05 23:18:07.388821190Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=65b00f06-f8c5-48eb-b4c5-85a5d044de94 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:18:07 ha-044175 crio[684]: time="2024-08-05 23:18:07.389241427Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722899887389221345,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=65b00f06-f8c5-48eb-b4c5-85a5d044de94 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:18:07 ha-044175 crio[684]: time="2024-08-05 23:18:07.390611150Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=15914895-8db5-46b2-a0a9-a1beba9ef7bf name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:18:07 ha-044175 crio[684]: time="2024-08-05 23:18:07.390682018Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=15914895-8db5-46b2-a0a9-a1beba9ef7bf name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:18:07 ha-044175 crio[684]: time="2024-08-05 23:18:07.390905286Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:14f7140ac408890dd788c7a9d6a9857531edad86ff751157ac035e6ab0d4afdc,PodSandboxId:1bf94d816bd6b0f9325f20c0b2453330291a5dfa79448419ddd925a97f951bb9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722899618925179407,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wmfql,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfc8bad7-d43d-4beb-991e-339a4ce96ab5,},Annotations:map[string]string{io.kubernetes.container.hash: fc00d50e,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8f17a7a758ce7d69c780273e3653b03bc4c01767911d236cad9862a3337e50,PodSandboxId:5d4208cbe441324fb59633dbd487e1e04ee180f1f9763a207a4979e68a4ab71e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722899473852759909,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d30d1a5b-cfbe-4de6-a964-75c32e5dbf62,},Annotations:map[string]string{io.kubernetes.container.hash: 4378961a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4617bbebfc992da16ee550b4c2c74a6d4c58299fe2518f6d24c3a10b1e02c941,PodSandboxId:449b4adbddbde16b1d8ca1645ef0b728416e504b57b2e560589ffd060ad34e4a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722899473857623130,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g9bml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd474413-e416-48db-a7bf-f3c40675819b,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd67db4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e65205c398221a15eecea1ec1092d54f364a44886b05149400c7be5ffafc3285,PodSandboxId:0df1c00cbbb9d6891997d631537dd7662e552d8dca3cea20f0b653ed34f6f7bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722899473821870209,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vzhst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9c09745-be
29-4403-9e7d-f9e4eaae5cac,},Annotations:map[string]string{io.kubernetes.container.hash: 1a8c310a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97fa319bea82614cab7525f9052bcc8a09fad765b260045dbf0d0fa0ca0290b2,PodSandboxId:4f369251bc6de76b6eba2d8a6404cb53a6bcba17f58bd09854de9edd65d080fa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CO
NTAINER_RUNNING,CreatedAt:1722899461696934419,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xqx4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8455705e-b140-4f1e-abff-6a71bbb5415f,},Annotations:map[string]string{io.kubernetes.container.hash: 4a9283b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04c382fd4a32fe8685a6f643ecf7a291e4d542c2223975f9df92991fe566b12a,PodSandboxId:b7b77d3f5c8a24f9906eb41c479b7254cd21f7c4d0c34b7014bdfa5f666df829,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172289945
7757340037,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vj5sd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c9cdcb-e1b7-44c8-a6e3-5e5aeb76ba03,},Annotations:map[string]string{io.kubernetes.container.hash: a40979c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40fc9655d4bc3a83cded30a0628a93c01856e1db81e027d8d131004479df9ed3,PodSandboxId:8ece168043c14c199a06a5ef7db680c0d579fe87db735e94a6522f616365372e,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17228994417
23968430,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26033e5e6fae3c18f82268d3b219e4ab,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c90a080943378c8bb82560d92b4399ff4ea03ab68d06f0de21852e1df609090,PodSandboxId:f0615d6a6ed3b0a919333497ebf049ca31c007ff3340b12a0a3b89c149d2558f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722899438261300658,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5280d6dbae40883a34349dd31a13a779,},Annotations:map[string]string{io.kubernetes.container.hash: bd2d1b8f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0893967672c7dc591bbcf220e56601b8a46fc11f07e63adbadaddec59ec1803,PodSandboxId:c7f5da3aca5fb3bac198b9144677aac33c3f5317946dad29f46e726a35d2c596,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722899438287785506,Labels:map[string]string{io.kubernetes.container.name:
etcd,io.kubernetes.pod.name: etcd-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47fd3d59fe4024c671f4b57dbae12a83,},Annotations:map[string]string{io.kubernetes.container.hash: fa9a7bc3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a85f2254a23cdec7e89ff8de2e31b06ddf2853808330965760217f1fd834004,PodSandboxId:57dd6eb50740256e4db3c59d0c1d850b0ba784d01abbeb7f8ea139160576fc43,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722899438266855231,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: ku
be-scheduler-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87091e6c521c934e57911d0cd84fc454,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52e65ab51d03f5a6abf04b86a788a251259de2c7971b7f676c0b5c5eb33e5849,PodSandboxId:41084305e84434e5136bb133632d08d27b3092395382f9508528787851465c5c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722899438199945652,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-
controller-manager-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de889c914a63f88b5552d92d7c04005b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=15914895-8db5-46b2-a0a9-a1beba9ef7bf name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	14f7140ac4088       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   1bf94d816bd6b       busybox-fc5497c4f-wmfql
	4617bbebfc992       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   449b4adbddbde       coredns-7db6d8ff4d-g9bml
	5e8f17a7a758c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   5d4208cbe4413       storage-provisioner
	e65205c398221       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   0df1c00cbbb9d       coredns-7db6d8ff4d-vzhst
	97fa319bea826       docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3    7 minutes ago       Running             kindnet-cni               0                   4f369251bc6de       kindnet-xqx4z
	04c382fd4a32f       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      7 minutes ago       Running             kube-proxy                0                   b7b77d3f5c8a2       kube-proxy-vj5sd
	40fc9655d4bc3       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   8ece168043c14       kube-vip-ha-044175
	b0893967672c7       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   c7f5da3aca5fb       etcd-ha-044175
	2a85f2254a23c       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      7 minutes ago       Running             kube-scheduler            0                   57dd6eb507402       kube-scheduler-ha-044175
	0c90a08094337       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      7 minutes ago       Running             kube-apiserver            0                   f0615d6a6ed3b       kube-apiserver-ha-044175
	52e65ab51d03f       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      7 minutes ago       Running             kube-controller-manager   0                   41084305e8443       kube-controller-manager-ha-044175
	
	
	==> coredns [4617bbebfc992da16ee550b4c2c74a6d4c58299fe2518f6d24c3a10b1e02c941] <==
	[INFO] 10.244.0.4:60064 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002005742s
	[INFO] 10.244.2.2:39716 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000202045s
	[INFO] 10.244.2.2:55066 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000286931s
	[INFO] 10.244.2.2:34830 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000174873s
	[INFO] 10.244.1.2:45895 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000157477s
	[INFO] 10.244.1.2:49930 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000144865s
	[INFO] 10.244.1.2:45888 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000155945s
	[INFO] 10.244.1.2:59948 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000081002s
	[INFO] 10.244.0.4:36231 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000095138s
	[INFO] 10.244.0.4:40536 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000228107s
	[INFO] 10.244.0.4:41374 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001409689s
	[INFO] 10.244.0.4:38989 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000065121s
	[INFO] 10.244.0.4:40466 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080252s
	[INFO] 10.244.2.2:50462 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000101194s
	[INFO] 10.244.2.2:37087 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000067854s
	[INFO] 10.244.1.2:33354 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011049s
	[INFO] 10.244.1.2:46378 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081449s
	[INFO] 10.244.1.2:35178 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059454s
	[INFO] 10.244.0.4:36998 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000070571s
	[INFO] 10.244.0.4:58448 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000039944s
	[INFO] 10.244.2.2:44511 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000351558s
	[INFO] 10.244.2.2:49689 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000125275s
	[INFO] 10.244.1.2:53510 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000125157s
	[INFO] 10.244.0.4:59119 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000073623s
	[INFO] 10.244.0.4:42575 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000124164s
	
	
	==> coredns [e65205c398221a15eecea1ec1092d54f364a44886b05149400c7be5ffafc3285] <==
	[INFO] 10.244.1.2:48958 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001900539s
	[INFO] 10.244.2.2:35523 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000199288s
	[INFO] 10.244.2.2:44169 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003510071s
	[INFO] 10.244.2.2:35265 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00015048s
	[INFO] 10.244.2.2:55592 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003101906s
	[INFO] 10.244.2.2:56153 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00013893s
	[INFO] 10.244.1.2:33342 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001850863s
	[INFO] 10.244.1.2:42287 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000148733s
	[INFO] 10.244.1.2:54735 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100517s
	[INFO] 10.244.1.2:59789 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001317452s
	[INFO] 10.244.0.4:40404 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000074048s
	[INFO] 10.244.0.4:48828 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002066965s
	[INFO] 10.244.0.4:45447 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000152682s
	[INFO] 10.244.2.2:44344 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146254s
	[INFO] 10.244.2.2:44960 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000197937s
	[INFO] 10.244.1.2:46098 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107825s
	[INFO] 10.244.0.4:53114 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104641s
	[INFO] 10.244.0.4:55920 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073557s
	[INFO] 10.244.2.2:36832 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001192s
	[INFO] 10.244.2.2:36836 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00014154s
	[INFO] 10.244.1.2:35009 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00021099s
	[INFO] 10.244.1.2:49630 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00009192s
	[INFO] 10.244.1.2:49164 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000128354s
	[INFO] 10.244.0.4:33938 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000080255s
	[INFO] 10.244.0.4:34551 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000092007s
	
	
	==> describe nodes <==
	Name:               ha-044175
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-044175
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=ha-044175
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_05T23_10_45_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 23:10:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-044175
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 23:18:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 23:13:48 +0000   Mon, 05 Aug 2024 23:10:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 23:13:48 +0000   Mon, 05 Aug 2024 23:10:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 23:13:48 +0000   Mon, 05 Aug 2024 23:10:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 23:13:48 +0000   Mon, 05 Aug 2024 23:11:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.57
	  Hostname:    ha-044175
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a7535c9f09f54963b658b49234079761
	  System UUID:                a7535c9f-09f5-4963-b658-b49234079761
	  Boot ID:                    97ae6699-97e9-4260-9f54-aa4546b6e1f0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-wmfql              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 coredns-7db6d8ff4d-g9bml             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m10s
	  kube-system                 coredns-7db6d8ff4d-vzhst             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m10s
	  kube-system                 etcd-ha-044175                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m23s
	  kube-system                 kindnet-xqx4z                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m10s
	  kube-system                 kube-apiserver-ha-044175             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system                 kube-controller-manager-ha-044175    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system                 kube-proxy-vj5sd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m10s
	  kube-system                 kube-scheduler-ha-044175             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system                 kube-vip-ha-044175                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m9s   kube-proxy       
	  Normal  Starting                 7m23s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m23s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m23s  kubelet          Node ha-044175 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m23s  kubelet          Node ha-044175 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m23s  kubelet          Node ha-044175 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m11s  node-controller  Node ha-044175 event: Registered Node ha-044175 in Controller
	  Normal  NodeReady                6m54s  kubelet          Node ha-044175 status is now: NodeReady
	  Normal  RegisteredNode           6m     node-controller  Node ha-044175 event: Registered Node ha-044175 in Controller
	  Normal  RegisteredNode           4m43s  node-controller  Node ha-044175 event: Registered Node ha-044175 in Controller
	
	
	Name:               ha-044175-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-044175-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=ha-044175
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_05T23_11_52_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 23:11:49 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-044175-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 23:14:54 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 05 Aug 2024 23:13:52 +0000   Mon, 05 Aug 2024 23:15:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 05 Aug 2024 23:13:52 +0000   Mon, 05 Aug 2024 23:15:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 05 Aug 2024 23:13:52 +0000   Mon, 05 Aug 2024 23:15:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 05 Aug 2024 23:13:52 +0000   Mon, 05 Aug 2024 23:15:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.112
	  Hostname:    ha-044175-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3b8a8f60868345a4bc1ba1393dbdecaf
	  System UUID:                3b8a8f60-8683-45a4-bc1b-a1393dbdecaf
	  Boot ID:                    fc606ffa-9f64-4457-a949-4b120e918d6b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tpqpw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 etcd-ha-044175-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m15s
	  kube-system                 kindnet-hqhgc                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m17s
	  kube-system                 kube-apiserver-ha-044175-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-controller-manager-ha-044175-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-proxy-jfs9q                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-scheduler-ha-044175-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-vip-ha-044175-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m13s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  6m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m17s (x8 over 6m18s)  kubelet          Node ha-044175-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m17s (x8 over 6m18s)  kubelet          Node ha-044175-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m17s (x7 over 6m18s)  kubelet          Node ha-044175-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m16s                  node-controller  Node ha-044175-m02 event: Registered Node ha-044175-m02 in Controller
	  Normal  RegisteredNode           6m                     node-controller  Node ha-044175-m02 event: Registered Node ha-044175-m02 in Controller
	  Normal  RegisteredNode           4m43s                  node-controller  Node ha-044175-m02 event: Registered Node ha-044175-m02 in Controller
	  Normal  NodeNotReady             2m33s                  node-controller  Node ha-044175-m02 status is now: NodeNotReady
	
	
	Name:               ha-044175-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-044175-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=ha-044175
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_05T23_13_09_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 23:13:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-044175-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 23:18:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 23:14:07 +0000   Mon, 05 Aug 2024 23:13:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 23:14:07 +0000   Mon, 05 Aug 2024 23:13:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 23:14:07 +0000   Mon, 05 Aug 2024 23:13:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 23:14:07 +0000   Mon, 05 Aug 2024 23:13:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.201
	  Hostname:    ha-044175-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 37d1f61608a14177b68f3f2d22a59a87
	  System UUID:                37d1f616-08a1-4177-b68f-3f2d22a59a87
	  Boot ID:                    7e4c1f16-18ce-41f6-83cb-3892189ef49a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-fqp2t                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 etcd-ha-044175-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m59s
	  kube-system                 kindnet-mc7wf                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m1s
	  kube-system                 kube-apiserver-ha-044175-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-controller-manager-ha-044175-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-proxy-4ql5l                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-scheduler-ha-044175-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-vip-ha-044175-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m56s                kube-proxy       
	  Normal  RegisteredNode           5m1s                 node-controller  Node ha-044175-m03 event: Registered Node ha-044175-m03 in Controller
	  Normal  NodeHasSufficientMemory  5m1s (x8 over 5m1s)  kubelet          Node ha-044175-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m1s (x8 over 5m1s)  kubelet          Node ha-044175-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m1s (x7 over 5m1s)  kubelet          Node ha-044175-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m                   node-controller  Node ha-044175-m03 event: Registered Node ha-044175-m03 in Controller
	  Normal  RegisteredNode           4m43s                node-controller  Node ha-044175-m03 event: Registered Node ha-044175-m03 in Controller
	
	
	Name:               ha-044175-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-044175-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=ha-044175
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_05T23_14_14_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 23:14:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-044175-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 23:17:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 23:14:43 +0000   Mon, 05 Aug 2024 23:14:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 23:14:43 +0000   Mon, 05 Aug 2024 23:14:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 23:14:43 +0000   Mon, 05 Aug 2024 23:14:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 23:14:43 +0000   Mon, 05 Aug 2024 23:14:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.228
	  Hostname:    ha-044175-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0d2536a5615e49c8bf2cb4a8d6f85b2f
	  System UUID:                0d2536a5-615e-49c8-bf2c-b4a8d6f85b2f
	  Boot ID:                    588f7741-6c69-4d39-a219-0c7b28545f45
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2rpdm       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m52s
	  kube-system                 kube-proxy-r5567    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m48s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m54s (x2 over 3m54s)  kubelet          Node ha-044175-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m54s (x2 over 3m54s)  kubelet          Node ha-044175-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m54s (x2 over 3m54s)  kubelet          Node ha-044175-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m53s                  node-controller  Node ha-044175-m04 event: Registered Node ha-044175-m04 in Controller
	  Normal  RegisteredNode           3m51s                  node-controller  Node ha-044175-m04 event: Registered Node ha-044175-m04 in Controller
	  Normal  RegisteredNode           3m50s                  node-controller  Node ha-044175-m04 event: Registered Node ha-044175-m04 in Controller
	  Normal  NodeReady                3m33s                  kubelet          Node ha-044175-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug 5 23:10] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051183] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040172] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.836800] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.566464] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.616407] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000003] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.214810] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.059894] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066481] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.165121] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.129651] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.275605] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +4.344469] systemd-fstab-generator[777]: Ignoring "noauto" option for root device
	[  +0.058179] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.730128] systemd-fstab-generator[959]: Ignoring "noauto" option for root device
	[  +0.903161] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.792303] systemd-fstab-generator[1383]: Ignoring "noauto" option for root device
	[  +0.087803] kauditd_printk_skb: 51 callbacks suppressed
	[ +13.188886] kauditd_printk_skb: 21 callbacks suppressed
	[Aug 5 23:11] kauditd_printk_skb: 35 callbacks suppressed
	[ +53.752834] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [b0893967672c7dc591bbcf220e56601b8a46fc11f07e63adbadaddec59ec1803] <==
	{"level":"warn","ts":"2024-08-05T23:18:07.652811Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:18:07.657457Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:18:07.671304Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:18:07.67661Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:18:07.679537Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:18:07.690938Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:18:07.694462Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:18:07.698028Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:18:07.707239Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:18:07.713194Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:18:07.720007Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:18:07.721636Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:18:07.72566Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:18:07.729346Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:18:07.730831Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:18:07.731817Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:18:07.74354Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:18:07.751246Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:18:07.759475Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:18:07.764432Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:18:07.770129Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:18:07.775356Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:18:07.778143Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:18:07.787531Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:18:07.796164Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 23:18:07 up 8 min,  0 users,  load average: 0.36, 0.29, 0.16
	Linux ha-044175 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [97fa319bea82614cab7525f9052bcc8a09fad765b260045dbf0d0fa0ca0290b2] <==
	I0805 23:17:32.761979       1 main.go:322] Node ha-044175-m04 has CIDR [10.244.3.0/24] 
	I0805 23:17:42.764912       1 main.go:295] Handling node with IPs: map[192.168.39.201:{}]
	I0805 23:17:42.764992       1 main.go:322] Node ha-044175-m03 has CIDR [10.244.2.0/24] 
	I0805 23:17:42.765260       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0805 23:17:42.765304       1 main.go:322] Node ha-044175-m04 has CIDR [10.244.3.0/24] 
	I0805 23:17:42.765491       1 main.go:295] Handling node with IPs: map[192.168.39.57:{}]
	I0805 23:17:42.765527       1 main.go:299] handling current node
	I0805 23:17:42.765545       1 main.go:295] Handling node with IPs: map[192.168.39.112:{}]
	I0805 23:17:42.765552       1 main.go:322] Node ha-044175-m02 has CIDR [10.244.1.0/24] 
	I0805 23:17:52.765910       1 main.go:295] Handling node with IPs: map[192.168.39.57:{}]
	I0805 23:17:52.766039       1 main.go:299] handling current node
	I0805 23:17:52.766088       1 main.go:295] Handling node with IPs: map[192.168.39.112:{}]
	I0805 23:17:52.766111       1 main.go:322] Node ha-044175-m02 has CIDR [10.244.1.0/24] 
	I0805 23:17:52.766248       1 main.go:295] Handling node with IPs: map[192.168.39.201:{}]
	I0805 23:17:52.766277       1 main.go:322] Node ha-044175-m03 has CIDR [10.244.2.0/24] 
	I0805 23:17:52.766347       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0805 23:17:52.766489       1 main.go:322] Node ha-044175-m04 has CIDR [10.244.3.0/24] 
	I0805 23:18:02.757795       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0805 23:18:02.757901       1 main.go:322] Node ha-044175-m04 has CIDR [10.244.3.0/24] 
	I0805 23:18:02.758099       1 main.go:295] Handling node with IPs: map[192.168.39.57:{}]
	I0805 23:18:02.758123       1 main.go:299] handling current node
	I0805 23:18:02.758146       1 main.go:295] Handling node with IPs: map[192.168.39.112:{}]
	I0805 23:18:02.758161       1 main.go:322] Node ha-044175-m02 has CIDR [10.244.1.0/24] 
	I0805 23:18:02.758219       1 main.go:295] Handling node with IPs: map[192.168.39.201:{}]
	I0805 23:18:02.758246       1 main.go:322] Node ha-044175-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [0c90a080943378c8bb82560d92b4399ff4ea03ab68d06f0de21852e1df609090] <==
	I0805 23:10:44.553265       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0805 23:10:44.571802       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0805 23:10:44.705277       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0805 23:10:56.995623       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0805 23:10:57.087672       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0805 23:13:07.385530       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 11.915µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0805 23:13:07.386344       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0805 23:13:07.387157       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0805 23:13:07.388336       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0805 23:13:07.388520       1 timeout.go:142] post-timeout activity - time-elapsed: 1.975722ms, POST "/api/v1/namespaces/kube-system/events" result: <nil>
	E0805 23:13:40.970037       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59396: use of closed network connection
	E0805 23:13:41.351341       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59424: use of closed network connection
	E0805 23:13:41.554982       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59442: use of closed network connection
	E0805 23:13:41.744646       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59466: use of closed network connection
	E0805 23:13:41.925027       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59488: use of closed network connection
	E0805 23:13:42.116864       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59512: use of closed network connection
	E0805 23:13:42.297555       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59532: use of closed network connection
	E0805 23:13:42.533986       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59540: use of closed network connection
	E0805 23:13:42.836007       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59562: use of closed network connection
	E0805 23:13:43.015513       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59590: use of closed network connection
	E0805 23:13:43.210146       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59614: use of closed network connection
	E0805 23:13:43.384149       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59630: use of closed network connection
	E0805 23:13:43.564630       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59640: use of closed network connection
	E0805 23:13:43.739822       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:59656: use of closed network connection
	W0805 23:15:02.886672       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.201 192.168.39.57]
	
	
	==> kube-controller-manager [52e65ab51d03f5a6abf04b86a788a251259de2c7971b7f676c0b5c5eb33e5849] <==
	I0805 23:13:06.571041       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-044175-m03" podCIDRs=["10.244.2.0/24"]
	I0805 23:13:06.703870       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-044175-m03"
	I0805 23:13:36.099644       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="119.430442ms"
	I0805 23:13:36.261859       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="160.151474ms"
	I0805 23:13:36.460678       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="190.445386ms"
	E0805 23:13:36.460940       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0805 23:13:36.461609       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="512.602µs"
	I0805 23:13:36.468238       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="65.566µs"
	I0805 23:13:36.594565       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.8598ms"
	I0805 23:13:36.594699       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.571µs"
	I0805 23:13:39.500676       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.430076ms"
	I0805 23:13:39.500785       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.167µs"
	I0805 23:13:39.752849       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.231938ms"
	I0805 23:13:39.753049       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.957µs"
	I0805 23:13:39.956312       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.499µs"
	I0805 23:13:40.560184       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.547403ms"
	I0805 23:13:40.560332       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.584µs"
	E0805 23:14:13.364835       1 certificate_controller.go:146] Sync csr-kff4f failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-kff4f": the object has been modified; please apply your changes to the latest version and try again
	I0805 23:14:13.639773       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-044175-m04\" does not exist"
	I0805 23:14:13.667820       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-044175-m04" podCIDRs=["10.244.3.0/24"]
	I0805 23:14:16.739704       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-044175-m04"
	I0805 23:14:34.661448       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-044175-m04"
	I0805 23:15:34.658711       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-044175-m04"
	I0805 23:15:34.861295       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.798335ms"
	I0805 23:15:34.864183       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="2.779398ms"
	
	
	==> kube-proxy [04c382fd4a32fe8685a6f643ecf7a291e4d542c2223975f9df92991fe566b12a] <==
	I0805 23:10:58.215353       1 server_linux.go:69] "Using iptables proxy"
	I0805 23:10:58.317044       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.57"]
	I0805 23:10:58.380977       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0805 23:10:58.381046       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 23:10:58.381064       1 server_linux.go:165] "Using iptables Proxier"
	I0805 23:10:58.385444       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 23:10:58.385706       1 server.go:872] "Version info" version="v1.30.3"
	I0805 23:10:58.385735       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 23:10:58.388101       1 config.go:192] "Starting service config controller"
	I0805 23:10:58.388578       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 23:10:58.388682       1 config.go:101] "Starting endpoint slice config controller"
	I0805 23:10:58.388703       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 23:10:58.391001       1 config.go:319] "Starting node config controller"
	I0805 23:10:58.391039       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 23:10:58.489499       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0805 23:10:58.489653       1 shared_informer.go:320] Caches are synced for service config
	I0805 23:10:58.491225       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2a85f2254a23cdec7e89ff8de2e31b06ddf2853808330965760217f1fd834004] <==
	W0805 23:10:42.174988       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0805 23:10:42.175038       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0805 23:10:42.233253       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0805 23:10:42.233303       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0805 23:10:42.397575       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0805 23:10:42.397708       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0805 23:10:42.441077       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0805 23:10:42.441194       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0805 23:10:42.451687       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0805 23:10:42.451734       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0805 23:10:44.475061       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0805 23:13:36.096100       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-wmfql\": pod busybox-fc5497c4f-wmfql is already assigned to node \"ha-044175\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-wmfql" node="ha-044175"
	E0805 23:13:36.096217       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod bfc8bad7-d43d-4beb-991e-339a4ce96ab5(default/busybox-fc5497c4f-wmfql) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-wmfql"
	E0805 23:13:36.096246       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-wmfql\": pod busybox-fc5497c4f-wmfql is already assigned to node \"ha-044175\"" pod="default/busybox-fc5497c4f-wmfql"
	I0805 23:13:36.096326       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-wmfql" node="ha-044175"
	E0805 23:13:36.098987       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-tpqpw\": pod busybox-fc5497c4f-tpqpw is already assigned to node \"ha-044175-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-tpqpw" node="ha-044175-m02"
	E0805 23:13:36.101555       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 0d6e0955-71b4-4790-89ab-452b0750a85d(default/busybox-fc5497c4f-tpqpw) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-tpqpw"
	E0805 23:13:36.102338       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-tpqpw\": pod busybox-fc5497c4f-tpqpw is already assigned to node \"ha-044175-m02\"" pod="default/busybox-fc5497c4f-tpqpw"
	I0805 23:13:36.102510       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-tpqpw" node="ha-044175-m02"
	E0805 23:14:13.759910       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-s9t2d\": pod kindnet-s9t2d is already assigned to node \"ha-044175-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-s9t2d" node="ha-044175-m04"
	E0805 23:14:13.760048       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 59dff32f-9b2c-4cdd-b706-fabcab7bdc67(kube-system/kindnet-s9t2d) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-s9t2d"
	E0805 23:14:13.760073       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-s9t2d\": pod kindnet-s9t2d is already assigned to node \"ha-044175-m04\"" pod="kube-system/kindnet-s9t2d"
	I0805 23:14:13.760122       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-s9t2d" node="ha-044175-m04"
	E0805 23:14:15.570618       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-s6qcf\": pod kindnet-s6qcf is already assigned to node \"ha-044175-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-s6qcf" node="ha-044175-m04"
	E0805 23:14:15.570740       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-s6qcf\": pod kindnet-s6qcf is already assigned to node \"ha-044175-m04\"" pod="kube-system/kindnet-s6qcf"
	
	
	==> kubelet <==
	Aug 05 23:13:44 ha-044175 kubelet[1390]: E0805 23:13:44.738483    1390 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:13:44 ha-044175 kubelet[1390]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:13:44 ha-044175 kubelet[1390]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:13:44 ha-044175 kubelet[1390]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:13:44 ha-044175 kubelet[1390]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:14:44 ha-044175 kubelet[1390]: E0805 23:14:44.733769    1390 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:14:44 ha-044175 kubelet[1390]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:14:44 ha-044175 kubelet[1390]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:14:44 ha-044175 kubelet[1390]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:14:44 ha-044175 kubelet[1390]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:15:44 ha-044175 kubelet[1390]: E0805 23:15:44.735270    1390 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:15:44 ha-044175 kubelet[1390]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:15:44 ha-044175 kubelet[1390]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:15:44 ha-044175 kubelet[1390]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:15:44 ha-044175 kubelet[1390]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:16:44 ha-044175 kubelet[1390]: E0805 23:16:44.737234    1390 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:16:44 ha-044175 kubelet[1390]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:16:44 ha-044175 kubelet[1390]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:16:44 ha-044175 kubelet[1390]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:16:44 ha-044175 kubelet[1390]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:17:44 ha-044175 kubelet[1390]: E0805 23:17:44.736647    1390 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:17:44 ha-044175 kubelet[1390]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:17:44 ha-044175 kubelet[1390]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:17:44 ha-044175 kubelet[1390]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:17:44 ha-044175 kubelet[1390]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-044175 -n ha-044175
helpers_test.go:261: (dbg) Run:  kubectl --context ha-044175 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (48.27s)
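Note: the kubelet entries in the post-mortem log above repeatedly fail the iptables canary because the guest kernel has no ip6tables nat table loaded ("Table does not exist (do you need to insmod?)"). A minimal diagnostic sketch, not part of the test run, assuming the ha-044175 profile is still up and the guest image ships the IPv6 netfilter modules:

	# open a shell on the primary control-plane node
	out/minikube-linux-amd64 -p ha-044175 ssh
	# inside the guest: check for and try to load the IPv6 nat module
	lsmod | grep ip6table_nat
	sudo modprobe ip6table_nat
	# if the module loads, the nat table becomes listable and the canary error should stop
	sudo ip6tables -t nat -L -n

Since kube-proxy above reports single-stack IPv4 ("No iptables support for family" ipFamily="IPv6"), this warning is most likely unrelated to the test failure itself.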

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (429.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-044175 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-044175 -v=7 --alsologtostderr
E0805 23:18:16.351298   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.crt: no such file or directory
E0805 23:19:39.399863   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-044175 -v=7 --alsologtostderr: exit status 82 (2m1.932161393s)

                                                
                                                
-- stdout --
	* Stopping node "ha-044175-m04"  ...
	* Stopping node "ha-044175-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 23:18:09.311535   35521 out.go:291] Setting OutFile to fd 1 ...
	I0805 23:18:09.311673   35521 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:18:09.311685   35521 out.go:304] Setting ErrFile to fd 2...
	I0805 23:18:09.311691   35521 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:18:09.311890   35521 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-9606/.minikube/bin
	I0805 23:18:09.312140   35521 out.go:298] Setting JSON to false
	I0805 23:18:09.312225   35521 mustload.go:65] Loading cluster: ha-044175
	I0805 23:18:09.312577   35521 config.go:182] Loaded profile config "ha-044175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:18:09.312670   35521 profile.go:143] Saving config to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/config.json ...
	I0805 23:18:09.312855   35521 mustload.go:65] Loading cluster: ha-044175
	I0805 23:18:09.312993   35521 config.go:182] Loaded profile config "ha-044175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:18:09.313031   35521 stop.go:39] StopHost: ha-044175-m04
	I0805 23:18:09.313378   35521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:18:09.313437   35521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:18:09.328144   35521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44217
	I0805 23:18:09.328573   35521 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:18:09.329145   35521 main.go:141] libmachine: Using API Version  1
	I0805 23:18:09.329175   35521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:18:09.329540   35521 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:18:09.332081   35521 out.go:177] * Stopping node "ha-044175-m04"  ...
	I0805 23:18:09.333538   35521 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0805 23:18:09.333578   35521 main.go:141] libmachine: (ha-044175-m04) Calling .DriverName
	I0805 23:18:09.333852   35521 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0805 23:18:09.333886   35521 main.go:141] libmachine: (ha-044175-m04) Calling .GetSSHHostname
	I0805 23:18:09.336522   35521 main.go:141] libmachine: (ha-044175-m04) DBG | domain ha-044175-m04 has defined MAC address 52:54:00:e5:ba:4d in network mk-ha-044175
	I0805 23:18:09.336978   35521 main.go:141] libmachine: (ha-044175-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:ba:4d", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:13:59 +0000 UTC Type:0 Mac:52:54:00:e5:ba:4d Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-044175-m04 Clientid:01:52:54:00:e5:ba:4d}
	I0805 23:18:09.337012   35521 main.go:141] libmachine: (ha-044175-m04) DBG | domain ha-044175-m04 has defined IP address 192.168.39.228 and MAC address 52:54:00:e5:ba:4d in network mk-ha-044175
	I0805 23:18:09.337220   35521 main.go:141] libmachine: (ha-044175-m04) Calling .GetSSHPort
	I0805 23:18:09.337388   35521 main.go:141] libmachine: (ha-044175-m04) Calling .GetSSHKeyPath
	I0805 23:18:09.337580   35521 main.go:141] libmachine: (ha-044175-m04) Calling .GetSSHUsername
	I0805 23:18:09.337735   35521 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m04/id_rsa Username:docker}
	I0805 23:18:09.426649   35521 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0805 23:18:09.480961   35521 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0805 23:18:09.534801   35521 main.go:141] libmachine: Stopping "ha-044175-m04"...
	I0805 23:18:09.534828   35521 main.go:141] libmachine: (ha-044175-m04) Calling .GetState
	I0805 23:18:09.536405   35521 main.go:141] libmachine: (ha-044175-m04) Calling .Stop
	I0805 23:18:09.539966   35521 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 0/120
	I0805 23:18:10.769496   35521 main.go:141] libmachine: (ha-044175-m04) Calling .GetState
	I0805 23:18:10.770890   35521 main.go:141] libmachine: Machine "ha-044175-m04" was stopped.
	I0805 23:18:10.770912   35521 stop.go:75] duration metric: took 1.437373428s to stop
	I0805 23:18:10.770936   35521 stop.go:39] StopHost: ha-044175-m03
	I0805 23:18:10.771349   35521 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:18:10.771398   35521 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:18:10.785879   35521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44099
	I0805 23:18:10.786281   35521 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:18:10.786764   35521 main.go:141] libmachine: Using API Version  1
	I0805 23:18:10.786783   35521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:18:10.787151   35521 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:18:10.789326   35521 out.go:177] * Stopping node "ha-044175-m03"  ...
	I0805 23:18:10.790645   35521 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0805 23:18:10.790675   35521 main.go:141] libmachine: (ha-044175-m03) Calling .DriverName
	I0805 23:18:10.790876   35521 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0805 23:18:10.790896   35521 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHHostname
	I0805 23:18:10.793861   35521 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:18:10.794262   35521 main.go:141] libmachine: (ha-044175-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f4:37:04", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:12:31 +0000 UTC Type:0 Mac:52:54:00:f4:37:04 Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-044175-m03 Clientid:01:52:54:00:f4:37:04}
	I0805 23:18:10.794297   35521 main.go:141] libmachine: (ha-044175-m03) DBG | domain ha-044175-m03 has defined IP address 192.168.39.201 and MAC address 52:54:00:f4:37:04 in network mk-ha-044175
	I0805 23:18:10.794422   35521 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHPort
	I0805 23:18:10.794595   35521 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHKeyPath
	I0805 23:18:10.794755   35521 main.go:141] libmachine: (ha-044175-m03) Calling .GetSSHUsername
	I0805 23:18:10.794964   35521 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m03/id_rsa Username:docker}
	I0805 23:18:10.886661   35521 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0805 23:18:10.940616   35521 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0805 23:18:10.995509   35521 main.go:141] libmachine: Stopping "ha-044175-m03"...
	I0805 23:18:10.995533   35521 main.go:141] libmachine: (ha-044175-m03) Calling .GetState
	I0805 23:18:10.997201   35521 main.go:141] libmachine: (ha-044175-m03) Calling .Stop
	I0805 23:18:11.000760   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 0/120
	I0805 23:18:12.002304   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 1/120
	I0805 23:18:13.003574   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 2/120
	I0805 23:18:14.004764   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 3/120
	I0805 23:18:15.006401   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 4/120
	I0805 23:18:16.008660   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 5/120
	I0805 23:18:17.010153   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 6/120
	I0805 23:18:18.011777   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 7/120
	I0805 23:18:19.013919   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 8/120
	I0805 23:18:20.015296   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 9/120
	I0805 23:18:21.017095   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 10/120
	I0805 23:18:22.018846   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 11/120
	I0805 23:18:23.020393   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 12/120
	I0805 23:18:24.022130   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 13/120
	I0805 23:18:25.023720   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 14/120
	I0805 23:18:26.025881   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 15/120
	I0805 23:18:27.027392   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 16/120
	I0805 23:18:28.029115   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 17/120
	I0805 23:18:29.030692   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 18/120
	I0805 23:18:30.032381   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 19/120
	I0805 23:18:31.034375   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 20/120
	I0805 23:18:32.035942   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 21/120
	I0805 23:18:33.037543   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 22/120
	I0805 23:18:34.039119   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 23/120
	I0805 23:18:35.040832   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 24/120
	I0805 23:18:36.043238   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 25/120
	I0805 23:18:37.044944   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 26/120
	I0805 23:18:38.046358   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 27/120
	I0805 23:18:39.048727   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 28/120
	I0805 23:18:40.050206   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 29/120
	I0805 23:18:41.052267   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 30/120
	I0805 23:18:42.053700   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 31/120
	I0805 23:18:43.054989   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 32/120
	I0805 23:18:44.056662   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 33/120
	I0805 23:18:45.058016   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 34/120
	I0805 23:18:46.059918   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 35/120
	I0805 23:18:47.062019   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 36/120
	I0805 23:18:48.063568   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 37/120
	I0805 23:18:49.065537   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 38/120
	I0805 23:18:50.067014   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 39/120
	I0805 23:18:51.068868   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 40/120
	I0805 23:18:52.070810   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 41/120
	I0805 23:18:53.072109   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 42/120
	I0805 23:18:54.073538   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 43/120
	I0805 23:18:55.074883   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 44/120
	I0805 23:18:56.076748   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 45/120
	I0805 23:18:57.078312   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 46/120
	I0805 23:18:58.079844   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 47/120
	I0805 23:18:59.081833   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 48/120
	I0805 23:19:00.083342   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 49/120
	I0805 23:19:01.085218   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 50/120
	I0805 23:19:02.086589   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 51/120
	I0805 23:19:03.088504   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 52/120
	I0805 23:19:04.089970   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 53/120
	I0805 23:19:05.091671   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 54/120
	I0805 23:19:06.093195   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 55/120
	I0805 23:19:07.095017   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 56/120
	I0805 23:19:08.096533   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 57/120
	I0805 23:19:09.098859   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 58/120
	I0805 23:19:10.100314   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 59/120
	I0805 23:19:11.102574   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 60/120
	I0805 23:19:12.104053   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 61/120
	I0805 23:19:13.105432   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 62/120
	I0805 23:19:14.106722   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 63/120
	I0805 23:19:15.108155   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 64/120
	I0805 23:19:16.109945   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 65/120
	I0805 23:19:17.111214   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 66/120
	I0805 23:19:18.112600   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 67/120
	I0805 23:19:19.114146   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 68/120
	I0805 23:19:20.115521   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 69/120
	I0805 23:19:21.116937   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 70/120
	I0805 23:19:22.118336   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 71/120
	I0805 23:19:23.119757   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 72/120
	I0805 23:19:24.121643   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 73/120
	I0805 23:19:25.123034   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 74/120
	I0805 23:19:26.125337   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 75/120
	I0805 23:19:27.127025   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 76/120
	I0805 23:19:28.128436   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 77/120
	I0805 23:19:29.130270   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 78/120
	I0805 23:19:30.132002   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 79/120
	I0805 23:19:31.133722   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 80/120
	I0805 23:19:32.135391   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 81/120
	I0805 23:19:33.136731   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 82/120
	I0805 23:19:34.138012   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 83/120
	I0805 23:19:35.139345   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 84/120
	I0805 23:19:36.140817   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 85/120
	I0805 23:19:37.142408   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 86/120
	I0805 23:19:38.143853   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 87/120
	I0805 23:19:39.145347   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 88/120
	I0805 23:19:40.146686   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 89/120
	I0805 23:19:41.148457   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 90/120
	I0805 23:19:42.150062   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 91/120
	I0805 23:19:43.151602   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 92/120
	I0805 23:19:44.153864   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 93/120
	I0805 23:19:45.155137   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 94/120
	I0805 23:19:46.156539   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 95/120
	I0805 23:19:47.157702   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 96/120
	I0805 23:19:48.159090   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 97/120
	I0805 23:19:49.160557   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 98/120
	I0805 23:19:50.161998   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 99/120
	I0805 23:19:51.163468   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 100/120
	I0805 23:19:52.165196   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 101/120
	I0805 23:19:53.166583   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 102/120
	I0805 23:19:54.168130   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 103/120
	I0805 23:19:55.169436   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 104/120
	I0805 23:19:56.171092   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 105/120
	I0805 23:19:57.172548   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 106/120
	I0805 23:19:58.174085   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 107/120
	I0805 23:19:59.175374   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 108/120
	I0805 23:20:00.176683   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 109/120
	I0805 23:20:01.178380   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 110/120
	I0805 23:20:02.180108   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 111/120
	I0805 23:20:03.181645   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 112/120
	I0805 23:20:04.182960   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 113/120
	I0805 23:20:05.184378   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 114/120
	I0805 23:20:06.185696   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 115/120
	I0805 23:20:07.186989   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 116/120
	I0805 23:20:08.189140   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 117/120
	I0805 23:20:09.190406   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 118/120
	I0805 23:20:10.191942   35521 main.go:141] libmachine: (ha-044175-m03) Waiting for machine to stop 119/120
	I0805 23:20:11.192482   35521 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0805 23:20:11.192522   35521 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0805 23:20:11.194759   35521 out.go:177] 
	W0805 23:20:11.196215   35521 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0805 23:20:11.196229   35521 out.go:239] * 
	* 
	W0805 23:20:11.198401   35521 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 23:20:11.200466   35521 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-044175 -v=7 --alsologtostderr" : exit status 82
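Note: exit status 82 here corresponds to the GUEST_STOP_TIMEOUT shown in the stderr above: ha-044175-m03 still reported "Running" after all 120 stop polls (roughly two minutes). A hedged diagnostic sketch for inspecting the stuck libvirt domain from the host; it assumes the kvm2 driver's qemu:///system URI shown in the profile config and that the domain name matches the node name, and the test harness itself does not run these commands:

	# list libvirt domains and their states
	virsh -c qemu:///system list --all
	# request a graceful shutdown of the stuck guest
	virsh -c qemu:///system shutdown ha-044175-m03
	# last resort: hard power-off of the domain
	virsh -c qemu:///system destroy ha-044175-m03

In this run no manual intervention happened; the subsequent "start -p ha-044175 --wait=true" below completed on its own.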
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-044175 --wait=true -v=7 --alsologtostderr
E0805 23:21:49.981774   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/functional-299463/client.crt: no such file or directory
E0805 23:23:16.351802   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-044175 --wait=true -v=7 --alsologtostderr: (5m4.439479297s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-044175
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-044175 -n ha-044175
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-044175 logs -n 25: (2.007272351s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-044175 cp ha-044175-m03:/home/docker/cp-test.txt                              | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m02:/home/docker/cp-test_ha-044175-m03_ha-044175-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n                                                                 | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n ha-044175-m02 sudo cat                                          | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | /home/docker/cp-test_ha-044175-m03_ha-044175-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-044175 cp ha-044175-m03:/home/docker/cp-test.txt                              | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m04:/home/docker/cp-test_ha-044175-m03_ha-044175-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n                                                                 | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n ha-044175-m04 sudo cat                                          | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | /home/docker/cp-test_ha-044175-m03_ha-044175-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-044175 cp testdata/cp-test.txt                                                | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n                                                                 | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-044175 cp ha-044175-m04:/home/docker/cp-test.txt                              | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3481107746/001/cp-test_ha-044175-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n                                                                 | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-044175 cp ha-044175-m04:/home/docker/cp-test.txt                              | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175:/home/docker/cp-test_ha-044175-m04_ha-044175.txt                       |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n                                                                 | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n ha-044175 sudo cat                                              | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | /home/docker/cp-test_ha-044175-m04_ha-044175.txt                                 |           |         |         |                     |                     |
	| cp      | ha-044175 cp ha-044175-m04:/home/docker/cp-test.txt                              | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m02:/home/docker/cp-test_ha-044175-m04_ha-044175-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n                                                                 | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n ha-044175-m02 sudo cat                                          | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | /home/docker/cp-test_ha-044175-m04_ha-044175-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-044175 cp ha-044175-m04:/home/docker/cp-test.txt                              | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m03:/home/docker/cp-test_ha-044175-m04_ha-044175-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n                                                                 | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n ha-044175-m03 sudo cat                                          | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | /home/docker/cp-test_ha-044175-m04_ha-044175-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-044175 node stop m02 -v=7                                                     | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-044175 node start m02 -v=7                                                    | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:17 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-044175 -v=7                                                           | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-044175 -v=7                                                                | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-044175 --wait=true -v=7                                                    | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:20 UTC | 05 Aug 24 23:25 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-044175                                                                | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:25 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 23:20:11
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 23:20:11.244663   36023 out.go:291] Setting OutFile to fd 1 ...
	I0805 23:20:11.244760   36023 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:20:11.244768   36023 out.go:304] Setting ErrFile to fd 2...
	I0805 23:20:11.244772   36023 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:20:11.244977   36023 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-9606/.minikube/bin
	I0805 23:20:11.245505   36023 out.go:298] Setting JSON to false
	I0805 23:20:11.246396   36023 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3757,"bootTime":1722896254,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0805 23:20:11.246452   36023 start.go:139] virtualization: kvm guest
	I0805 23:20:11.248666   36023 out.go:177] * [ha-044175] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0805 23:20:11.250176   36023 notify.go:220] Checking for updates...
	I0805 23:20:11.250186   36023 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 23:20:11.251701   36023 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 23:20:11.253248   36023 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19373-9606/kubeconfig
	I0805 23:20:11.254509   36023 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19373-9606/.minikube
	I0805 23:20:11.255648   36023 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0805 23:20:11.256694   36023 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 23:20:11.258173   36023 config.go:182] Loaded profile config "ha-044175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:20:11.258262   36023 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 23:20:11.258795   36023 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:20:11.258871   36023 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:20:11.275140   36023 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46293
	I0805 23:20:11.275509   36023 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:20:11.276277   36023 main.go:141] libmachine: Using API Version  1
	I0805 23:20:11.276304   36023 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:20:11.276594   36023 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:20:11.276754   36023 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:20:11.312308   36023 out.go:177] * Using the kvm2 driver based on existing profile
	I0805 23:20:11.313540   36023 start.go:297] selected driver: kvm2
	I0805 23:20:11.313559   36023 start.go:901] validating driver "kvm2" against &{Name:ha-044175 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-044175 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.112 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.201 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.228 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 23:20:11.313716   36023 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 23:20:11.314047   36023 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 23:20:11.314117   36023 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19373-9606/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0805 23:20:11.328722   36023 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0805 23:20:11.329453   36023 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 23:20:11.329481   36023 cni.go:84] Creating CNI manager for ""
	I0805 23:20:11.329488   36023 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0805 23:20:11.329551   36023 start.go:340] cluster config:
	{Name:ha-044175 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-044175 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.112 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.201 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.228 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 23:20:11.329722   36023 iso.go:125] acquiring lock: {Name:mk54a637ed625e04bb2b6adf973b61c976cd6d35 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 23:20:11.331514   36023 out.go:177] * Starting "ha-044175" primary control-plane node in "ha-044175" cluster
	I0805 23:20:11.332803   36023 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 23:20:11.332846   36023 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0805 23:20:11.332858   36023 cache.go:56] Caching tarball of preloaded images
	I0805 23:20:11.332935   36023 preload.go:172] Found /home/jenkins/minikube-integration/19373-9606/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0805 23:20:11.332949   36023 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0805 23:20:11.333069   36023 profile.go:143] Saving config to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/config.json ...
	I0805 23:20:11.333250   36023 start.go:360] acquireMachinesLock for ha-044175: {Name:mkd2ba511c39504598222edbf83078b718329186 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 23:20:11.333299   36023 start.go:364] duration metric: took 31.481µs to acquireMachinesLock for "ha-044175"
	I0805 23:20:11.333318   36023 start.go:96] Skipping create...Using existing machine configuration
	I0805 23:20:11.333327   36023 fix.go:54] fixHost starting: 
	I0805 23:20:11.333571   36023 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:20:11.333607   36023 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:20:11.347610   36023 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32929
	I0805 23:20:11.348065   36023 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:20:11.348630   36023 main.go:141] libmachine: Using API Version  1
	I0805 23:20:11.348668   36023 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:20:11.348959   36023 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:20:11.349162   36023 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:20:11.349310   36023 main.go:141] libmachine: (ha-044175) Calling .GetState
	I0805 23:20:11.351111   36023 fix.go:112] recreateIfNeeded on ha-044175: state=Running err=<nil>
	W0805 23:20:11.351133   36023 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 23:20:11.353615   36023 out.go:177] * Updating the running kvm2 "ha-044175" VM ...
	I0805 23:20:11.355215   36023 machine.go:94] provisionDockerMachine start ...
	I0805 23:20:11.355243   36023 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:20:11.355532   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:20:11.358123   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:20:11.358605   36023 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:20:11.358644   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:20:11.358761   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:20:11.358938   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:20:11.359099   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:20:11.359250   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:20:11.359406   36023 main.go:141] libmachine: Using SSH client type: native
	I0805 23:20:11.359582   36023 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0805 23:20:11.359592   36023 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 23:20:11.464563   36023 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-044175
	
	I0805 23:20:11.464593   36023 main.go:141] libmachine: (ha-044175) Calling .GetMachineName
	I0805 23:20:11.464871   36023 buildroot.go:166] provisioning hostname "ha-044175"
	I0805 23:20:11.464902   36023 main.go:141] libmachine: (ha-044175) Calling .GetMachineName
	I0805 23:20:11.465139   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:20:11.467742   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:20:11.468117   36023 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:20:11.468141   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:20:11.468296   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:20:11.468477   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:20:11.468635   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:20:11.468759   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:20:11.468928   36023 main.go:141] libmachine: Using SSH client type: native
	I0805 23:20:11.469084   36023 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0805 23:20:11.469094   36023 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-044175 && echo "ha-044175" | sudo tee /etc/hostname
	I0805 23:20:11.584014   36023 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-044175
	
	I0805 23:20:11.584043   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:20:11.587100   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:20:11.587536   36023 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:20:11.587563   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:20:11.587758   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:20:11.587930   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:20:11.588098   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:20:11.588219   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:20:11.588360   36023 main.go:141] libmachine: Using SSH client type: native
	I0805 23:20:11.588509   36023 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0805 23:20:11.588523   36023 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-044175' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-044175/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-044175' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 23:20:11.692374   36023 main.go:141] libmachine: SSH cmd err, output: <nil>: 
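The shell block above only touches /etc/hosts when no existing entry already ends in ha-044175: it rewrites a 127.0.1.1 line if one is present, otherwise it appends one. Because the command produced no output, either an entry was already present or the 127.0.1.1 line was rewritten in place by sed (the tee branch would have echoed the new line). A hypothetical manual spot check, assuming the ha-044175 profile is still running, would be:

	out/minikube-linux-amd64 -p ha-044175 ssh "hostname; grep ha-044175 /etc/hosts"
	# hostname should print ha-044175; the grep shows whichever /etc/hosts line carries the name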
	I0805 23:20:11.692411   36023 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19373-9606/.minikube CaCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19373-9606/.minikube}
	I0805 23:20:11.692458   36023 buildroot.go:174] setting up certificates
	I0805 23:20:11.692474   36023 provision.go:84] configureAuth start
	I0805 23:20:11.692492   36023 main.go:141] libmachine: (ha-044175) Calling .GetMachineName
	I0805 23:20:11.692736   36023 main.go:141] libmachine: (ha-044175) Calling .GetIP
	I0805 23:20:11.695532   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:20:11.695910   36023 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:20:11.695943   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:20:11.696149   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:20:11.698258   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:20:11.698677   36023 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:20:11.698701   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:20:11.698744   36023 provision.go:143] copyHostCerts
	I0805 23:20:11.698772   36023 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem
	I0805 23:20:11.698808   36023 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem, removing ...
	I0805 23:20:11.698818   36023 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem
	I0805 23:20:11.698904   36023 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem (1123 bytes)
	I0805 23:20:11.699000   36023 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem
	I0805 23:20:11.699035   36023 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem, removing ...
	I0805 23:20:11.699041   36023 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem
	I0805 23:20:11.699089   36023 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem (1679 bytes)
	I0805 23:20:11.699151   36023 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem
	I0805 23:20:11.699172   36023 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem, removing ...
	I0805 23:20:11.699181   36023 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem
	I0805 23:20:11.699215   36023 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem (1082 bytes)
	I0805 23:20:11.699277   36023 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem org=jenkins.ha-044175 san=[127.0.0.1 192.168.39.57 ha-044175 localhost minikube]
	I0805 23:20:11.801111   36023 provision.go:177] copyRemoteCerts
	I0805 23:20:11.801163   36023 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 23:20:11.801182   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:20:11.804141   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:20:11.804513   36023 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:20:11.804541   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:20:11.804763   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:20:11.804985   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:20:11.805221   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:20:11.805391   36023 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/id_rsa Username:docker}
	I0805 23:20:11.886941   36023 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 23:20:11.887017   36023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 23:20:11.915345   36023 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 23:20:11.915417   36023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0805 23:20:11.940652   36023 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 23:20:11.940719   36023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 23:20:11.967966   36023 provision.go:87] duration metric: took 275.477847ms to configureAuth
	I0805 23:20:11.967992   36023 buildroot.go:189] setting minikube options for container-runtime
	I0805 23:20:11.968200   36023 config.go:182] Loaded profile config "ha-044175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:20:11.968270   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:20:11.970923   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:20:11.971301   36023 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:20:11.971325   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:20:11.971493   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:20:11.971704   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:20:11.971888   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:20:11.972062   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:20:11.972238   36023 main.go:141] libmachine: Using SSH client type: native
	I0805 23:20:11.972414   36023 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0805 23:20:11.972433   36023 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 23:21:42.751850   36023 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 23:21:42.751890   36023 machine.go:97] duration metric: took 1m31.396656241s to provisionDockerMachine
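For triage, note that nearly all of the 1m31.4s recorded for provisionDockerMachine was spent inside the single SSH command above, which writes /etc/sysconfig/crio.minikube and then restarts crio (issued at 23:20:11.97, answered at 23:21:42.75, roughly 91 s). A hypothetical follow-up check of the written file and the service state, reusing only paths that appear in this log, could be:

	out/minikube-linux-amd64 -p ha-044175 ssh "cat /etc/sysconfig/crio.minikube && sudo systemctl is-active crio"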
	I0805 23:21:42.751905   36023 start.go:293] postStartSetup for "ha-044175" (driver="kvm2")
	I0805 23:21:42.751921   36023 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 23:21:42.751938   36023 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:21:42.752288   36023 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 23:21:42.752314   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:21:42.755819   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:21:42.756358   36023 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:21:42.756389   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:21:42.756526   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:21:42.756719   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:21:42.756882   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:21:42.757010   36023 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/id_rsa Username:docker}
	I0805 23:21:42.840368   36023 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 23:21:42.844976   36023 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 23:21:42.845001   36023 filesync.go:126] Scanning /home/jenkins/minikube-integration/19373-9606/.minikube/addons for local assets ...
	I0805 23:21:42.845061   36023 filesync.go:126] Scanning /home/jenkins/minikube-integration/19373-9606/.minikube/files for local assets ...
	I0805 23:21:42.845164   36023 filesync.go:149] local asset: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem -> 167922.pem in /etc/ssl/certs
	I0805 23:21:42.845176   36023 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem -> /etc/ssl/certs/167922.pem
	I0805 23:21:42.845264   36023 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 23:21:42.855994   36023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem --> /etc/ssl/certs/167922.pem (1708 bytes)
	I0805 23:21:42.881752   36023 start.go:296] duration metric: took 129.831599ms for postStartSetup
	I0805 23:21:42.881823   36023 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:21:42.882108   36023 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0805 23:21:42.882132   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:21:42.884783   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:21:42.885247   36023 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:21:42.885275   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:21:42.885398   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:21:42.885579   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:21:42.885846   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:21:42.885995   36023 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/id_rsa Username:docker}
	W0805 23:21:42.966113   36023 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0805 23:21:42.966136   36023 fix.go:56] duration metric: took 1m31.632810326s for fixHost
	I0805 23:21:42.966156   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:21:42.968838   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:21:42.969290   36023 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:21:42.969319   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:21:42.969493   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:21:42.969680   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:21:42.969859   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:21:42.969991   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:21:42.970167   36023 main.go:141] libmachine: Using SSH client type: native
	I0805 23:21:42.970323   36023 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0805 23:21:42.970332   36023 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 23:21:43.068012   36023 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722900103.033249156
	
	I0805 23:21:43.068044   36023 fix.go:216] guest clock: 1722900103.033249156
	I0805 23:21:43.068056   36023 fix.go:229] Guest: 2024-08-05 23:21:43.033249156 +0000 UTC Remote: 2024-08-05 23:21:42.966143145 +0000 UTC m=+91.756743346 (delta=67.106011ms)
	I0805 23:21:43.068084   36023 fix.go:200] guest clock delta is within tolerance: 67.106011ms
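The tolerance decision above is plain arithmetic over the two timestamps in the fix.go lines: the guest clock reads 1722900103.033249156 while the recorded Remote reference corresponds to 1722900102.966143145. Reproduced with bc purely for illustration:

	echo '1722900103.033249156 - 1722900102.966143145' | bc
	# .067106011  (about 67.106 ms, matching the logged delta=67.106011ms)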
	I0805 23:21:43.068093   36023 start.go:83] releasing machines lock for "ha-044175", held for 1m31.734781646s
	I0805 23:21:43.068118   36023 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:21:43.068390   36023 main.go:141] libmachine: (ha-044175) Calling .GetIP
	I0805 23:21:43.071393   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:21:43.071734   36023 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:21:43.071774   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:21:43.071925   36023 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:21:43.072483   36023 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:21:43.072633   36023 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:21:43.072729   36023 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 23:21:43.072767   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:21:43.072870   36023 ssh_runner.go:195] Run: cat /version.json
	I0805 23:21:43.072891   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:21:43.075495   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:21:43.075569   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:21:43.075855   36023 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:21:43.075881   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:21:43.075904   36023 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:21:43.075919   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:21:43.076054   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:21:43.076168   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:21:43.076243   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:21:43.076298   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:21:43.076347   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:21:43.076467   36023 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/id_rsa Username:docker}
	I0805 23:21:43.076483   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:21:43.076656   36023 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/id_rsa Username:docker}
	I0805 23:21:43.152577   36023 ssh_runner.go:195] Run: systemctl --version
	I0805 23:21:43.176092   36023 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 23:21:43.426720   36023 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 23:21:43.436098   36023 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 23:21:43.436192   36023 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 23:21:43.451465   36023 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0805 23:21:43.451490   36023 start.go:495] detecting cgroup driver to use...
	I0805 23:21:43.451550   36023 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 23:21:43.478451   36023 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 23:21:43.497720   36023 docker.go:217] disabling cri-docker service (if available) ...
	I0805 23:21:43.497777   36023 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 23:21:43.525877   36023 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 23:21:43.542713   36023 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 23:21:43.708709   36023 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 23:21:43.855703   36023 docker.go:233] disabling docker service ...
	I0805 23:21:43.855783   36023 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 23:21:43.873752   36023 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 23:21:43.887975   36023 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 23:21:44.046539   36023 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 23:21:44.196442   36023 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 23:21:44.210803   36023 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 23:21:44.239367   36023 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0805 23:21:44.239419   36023 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:21:44.250894   36023 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 23:21:44.250973   36023 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:21:44.261409   36023 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:21:44.271788   36023 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:21:44.282752   36023 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 23:21:44.293358   36023 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:21:44.303547   36023 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:21:44.314761   36023 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:21:44.324707   36023 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 23:21:44.333905   36023 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 23:21:44.343110   36023 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 23:21:44.489687   36023 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 23:21:53.882601   36023 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.392872217s)
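All of the sed edits above target /etc/crio/crio.conf.d/02-crio.conf before the 9.4 s crio restart: pause image, cgroup manager, conmon cgroup and the unprivileged-port sysctl. A minimal, illustrative way to read the effective values back (the grep pattern is an assumption; the file path and keys come from the log) would be:

	out/minikube-linux-amd64 -p ha-044175 ssh \
	  "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"
	# expected: pause_image = "registry.k8s.io/pause:3.9", cgroup_manager = "cgroupfs",
	# conmon_cgroup = "pod", and "net.ipv4.ip_unprivileged_port_start=0" under default_sysctls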
	I0805 23:21:53.882631   36023 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 23:21:53.882683   36023 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 23:21:53.888039   36023 start.go:563] Will wait 60s for crictl version
	I0805 23:21:53.888102   36023 ssh_runner.go:195] Run: which crictl
	I0805 23:21:53.892001   36023 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 23:21:53.933976   36023 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
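The version block above confirms that crictl, pointed at unix:///var/run/crio/crio.sock by the /etc/crictl.yaml written earlier, is talking to CRI-O 1.29.1. A hypothetical manual equivalent of the same check is:

	out/minikube-linux-amd64 -p ha-044175 ssh "cat /etc/crictl.yaml && sudo crictl version"
	# crictl should again report RuntimeName cri-o / RuntimeVersion 1.29.1, as seen above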
	I0805 23:21:53.934044   36023 ssh_runner.go:195] Run: crio --version
	I0805 23:21:53.964361   36023 ssh_runner.go:195] Run: crio --version
	I0805 23:21:53.995549   36023 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0805 23:21:53.997011   36023 main.go:141] libmachine: (ha-044175) Calling .GetIP
	I0805 23:21:53.999763   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:21:54.000177   36023 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:21:54.000197   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:21:54.000365   36023 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0805 23:21:54.005168   36023 kubeadm.go:883] updating cluster {Name:ha-044175 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-044175 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.112 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.201 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.228 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 23:21:54.005291   36023 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 23:21:54.005344   36023 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 23:21:54.051772   36023 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 23:21:54.051790   36023 crio.go:433] Images already preloaded, skipping extraction
	I0805 23:21:54.051849   36023 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 23:21:54.086832   36023 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 23:21:54.086857   36023 cache_images.go:84] Images are preloaded, skipping loading
	I0805 23:21:54.086868   36023 kubeadm.go:934] updating node { 192.168.39.57 8443 v1.30.3 crio true true} ...
	I0805 23:21:54.086983   36023 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-044175 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.57
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-044175 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
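The [Unit]/[Service]/[Install] fragment above is the kubelet drop-in minikube generates for this node; per the 308-byte scp a few lines below, it lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. Reading it back directly (a hypothetical check, not part of the test) would be:

	out/minikube-linux-amd64 -p ha-044175 ssh "sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf"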
	I0805 23:21:54.087101   36023 ssh_runner.go:195] Run: crio config
	I0805 23:21:54.137581   36023 cni.go:84] Creating CNI manager for ""
	I0805 23:21:54.137603   36023 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0805 23:21:54.137615   36023 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 23:21:54.137639   36023 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.57 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-044175 NodeName:ha-044175 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.57"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.57 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 23:21:54.137779   36023 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.57
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-044175"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.57
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.57"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
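The kubeadm documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) are staged on the node as /var/tmp/minikube/kubeadm.yaml.new, per the 2150-byte scp further down. As an optional sanity check, and assuming kubeadm's own validator is acceptable here (the test itself never runs it), the staged file could be fed to the bundled binary:

	out/minikube-linux-amd64 -p ha-044175 ssh \
	  "sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new"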
	
	I0805 23:21:54.137804   36023 kube-vip.go:115] generating kube-vip config ...
	I0805 23:21:54.137857   36023 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0805 23:21:54.149472   36023 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0805 23:21:54.149596   36023 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
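The static pod above is what serves the APIServerHAVIP 192.168.39.254 for this HA cluster; the manifest is copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines below and kubelet then runs it. A hedged way to see whether this node currently announces the VIP (with vip_leaderelection enabled, kube-vip only binds it on the holder of the plndr-cp-lock lease) would be:

	out/minikube-linux-amd64 -p ha-044175 ssh "ip -4 addr show eth0 | grep 192.168.39.254 || echo VIP not held by this node"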
	I0805 23:21:54.149650   36023 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 23:21:54.160037   36023 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 23:21:54.160090   36023 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0805 23:21:54.169496   36023 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0805 23:21:54.187044   36023 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 23:21:54.205158   36023 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0805 23:21:54.222292   36023 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0805 23:21:54.240454   36023 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0805 23:21:54.245443   36023 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 23:21:54.392594   36023 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 23:21:54.407717   36023 certs.go:68] Setting up /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175 for IP: 192.168.39.57
	I0805 23:21:54.407738   36023 certs.go:194] generating shared ca certs ...
	I0805 23:21:54.407753   36023 certs.go:226] acquiring lock for ca certs: {Name:mkf35a042c1656d191f542eee7fa087aad4d29d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 23:21:54.407879   36023 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key
	I0805 23:21:54.407930   36023 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key
	I0805 23:21:54.407937   36023 certs.go:256] generating profile certs ...
	I0805 23:21:54.408001   36023 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/client.key
	I0805 23:21:54.408027   36023 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key.cb584d1b
	I0805 23:21:54.408040   36023 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt.cb584d1b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.57 192.168.39.112 192.168.39.201 192.168.39.254]
	I0805 23:21:54.763069   36023 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt.cb584d1b ...
	I0805 23:21:54.763103   36023 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt.cb584d1b: {Name:mk1a963e63c48b245bb8cae0d4c77d2e6a272041 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 23:21:54.763266   36023 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key.cb584d1b ...
	I0805 23:21:54.763280   36023 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key.cb584d1b: {Name:mkb0217b66b1058ef522d13f78348c47d2230a95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 23:21:54.763344   36023 certs.go:381] copying /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt.cb584d1b -> /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt
	I0805 23:21:54.763477   36023 certs.go:385] copying /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key.cb584d1b -> /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key
	I0805 23:21:54.763599   36023 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.key
	I0805 23:21:54.763614   36023 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0805 23:21:54.763627   36023 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0805 23:21:54.763637   36023 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0805 23:21:54.763650   36023 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0805 23:21:54.763661   36023 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0805 23:21:54.763671   36023 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0805 23:21:54.763687   36023 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0805 23:21:54.763699   36023 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0805 23:21:54.763746   36023 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792.pem (1338 bytes)
	W0805 23:21:54.763777   36023 certs.go:480] ignoring /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792_empty.pem, impossibly tiny 0 bytes
	I0805 23:21:54.763783   36023 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 23:21:54.763803   36023 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem (1082 bytes)
	I0805 23:21:54.763820   36023 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem (1123 bytes)
	I0805 23:21:54.763841   36023 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem (1679 bytes)
	I0805 23:21:54.763875   36023 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem (1708 bytes)
	I0805 23:21:54.763899   36023 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0805 23:21:54.763912   36023 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792.pem -> /usr/share/ca-certificates/16792.pem
	I0805 23:21:54.763923   36023 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem -> /usr/share/ca-certificates/167922.pem
	I0805 23:21:54.764429   36023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 23:21:54.793469   36023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0805 23:21:54.819427   36023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 23:21:54.845089   36023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 23:21:54.870074   36023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0805 23:21:54.895799   36023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 23:21:54.920920   36023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 23:21:54.946520   36023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0805 23:21:54.970748   36023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 23:21:54.994658   36023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792.pem --> /usr/share/ca-certificates/16792.pem (1338 bytes)
	I0805 23:21:55.019117   36023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem --> /usr/share/ca-certificates/167922.pem (1708 bytes)
	I0805 23:21:55.043437   36023 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 23:21:55.060413   36023 ssh_runner.go:195] Run: openssl version
	I0805 23:21:55.066853   36023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16792.pem && ln -fs /usr/share/ca-certificates/16792.pem /etc/ssl/certs/16792.pem"
	I0805 23:21:55.077479   36023 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16792.pem
	I0805 23:21:55.082037   36023 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 23:03 /usr/share/ca-certificates/16792.pem
	I0805 23:21:55.082095   36023 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16792.pem
	I0805 23:21:55.087808   36023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16792.pem /etc/ssl/certs/51391683.0"
	I0805 23:21:55.097099   36023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167922.pem && ln -fs /usr/share/ca-certificates/167922.pem /etc/ssl/certs/167922.pem"
	I0805 23:21:55.107692   36023 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167922.pem
	I0805 23:21:55.112146   36023 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 23:03 /usr/share/ca-certificates/167922.pem
	I0805 23:21:55.112182   36023 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167922.pem
	I0805 23:21:55.117763   36023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167922.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 23:21:55.126784   36023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 23:21:55.137634   36023 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 23:21:55.142200   36023 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0805 23:21:55.142243   36023 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 23:21:55.148293   36023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
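	[editor's note] The commands above install each PEM under /usr/share/ca-certificates and then link it into /etc/ssl/certs under its OpenSSL subject-hash name (e.g. b5213941.0 for minikubeCA.pem), which is how OpenSSL-based clients locate trusted CAs. The following is a minimal, hypothetical Go sketch of that same convention run locally rather than over ssh_runner; it is not minikube's own code, and the helper name linkBySubjectHash is invented for illustration.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkBySubjectHash reproduces the pattern visible in the log: ask
	// openssl for the certificate's subject hash, then symlink the cert
	// into the trust directory as "<hash>.0" so OpenSSL can find it.
	func linkBySubjectHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		// Roughly what "ln -fs" does: replace any existing entry, then link.
		_ = os.Remove(link)
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}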
	I0805 23:21:55.157937   36023 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 23:21:55.162495   36023 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 23:21:55.168345   36023 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 23:21:55.174266   36023 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 23:21:55.180002   36023 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 23:21:55.185923   36023 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 23:21:55.191533   36023 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
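	[editor's note] The "-checkend 86400" probes above verify that each control-plane certificate remains valid for at least 24 hours: openssl exits 0 if the certificate will not expire within that window and non-zero otherwise. A small, hypothetical sketch of the same check (assuming local cert paths; certValidFor is an invented helper, not a minikube API):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// certValidFor reports whether the certificate at path is still valid
	// for at least the given number of seconds, using the same
	// "openssl x509 -checkend" probe shown in the log.
	func certValidFor(path string, seconds int) bool {
		cmd := exec.Command("openssl", "x509", "-noout", "-in", path,
			"-checkend", fmt.Sprint(seconds))
		return cmd.Run() == nil
	}

	func main() {
		certs := []string{
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
			"/var/lib/minikube/certs/front-proxy-client.crt",
		}
		for _, c := range certs {
			fmt.Printf("%s valid for 24h: %v\n", c, certValidFor(c, 86400))
		}
	}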
	I0805 23:21:55.197330   36023 kubeadm.go:392] StartCluster: {Name:ha-044175 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-044175 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.112 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.201 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.228 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 23:21:55.197466   36023 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 23:21:55.197517   36023 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 23:21:55.237272   36023 cri.go:89] found id: "d46bdf5c93d9a335000c2d92e3814610ae1e74850c28c7ec832821e7ed10c1b6"
	I0805 23:21:55.237297   36023 cri.go:89] found id: "d84c6fc25afe5bdf844e9489b06726f7f183fbc38a418926f652ec79c6e9e559"
	I0805 23:21:55.237301   36023 cri.go:89] found id: "1a47cf65b14975f4678f4b5794ac4f45733e19f22e2b659a18baad22d1394d26"
	I0805 23:21:55.237304   36023 cri.go:89] found id: "bb38cdefb5246fc31da8b49e32a081eb2003b9c9de9c7c5941b6e563179848e7"
	I0805 23:21:55.237306   36023 cri.go:89] found id: "2e11762a0814597bbc6d2cdd8b65c5f03a1970af0ad39df0b7e88eb542fad309"
	I0805 23:21:55.237309   36023 cri.go:89] found id: "4617bbebfc992da16ee550b4c2c74a6d4c58299fe2518f6d24c3a10b1e02c941"
	I0805 23:21:55.237312   36023 cri.go:89] found id: "e65205c398221a15eecea1ec1092d54f364a44886b05149400c7be5ffafc3285"
	I0805 23:21:55.237314   36023 cri.go:89] found id: "97fa319bea82614cab7525f9052bcc8a09fad765b260045dbf0d0fa0ca0290b2"
	I0805 23:21:55.237316   36023 cri.go:89] found id: "04c382fd4a32fe8685a6f643ecf7a291e4d542c2223975f9df92991fe566b12a"
	I0805 23:21:55.237321   36023 cri.go:89] found id: "40fc9655d4bc3a83cded30a0628a93c01856e1db81e027d8d131004479df9ed3"
	I0805 23:21:55.237323   36023 cri.go:89] found id: "b0893967672c7dc591bbcf220e56601b8a46fc11f07e63adbadaddec59ec1803"
	I0805 23:21:55.237328   36023 cri.go:89] found id: "2a85f2254a23cdec7e89ff8de2e31b06ddf2853808330965760217f1fd834004"
	I0805 23:21:55.237331   36023 cri.go:89] found id: "0c90a080943378c8bb82560d92b4399ff4ea03ab68d06f0de21852e1df609090"
	I0805 23:21:55.237333   36023 cri.go:89] found id: "52e65ab51d03f5a6abf04b86a788a251259de2c7971b7f676c0b5c5eb33e5849"
	I0805 23:21:55.237337   36023 cri.go:89] found id: ""
	I0805 23:21:55.237377   36023 ssh_runner.go:195] Run: sudo runc list -f json
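	[editor's note] Before restarting components, StartCluster enumerates existing kube-system containers (running or exited) via crictl, filtered by the pod-namespace label, as shown on the Run line above. A rough, hypothetical equivalent of that filter, run locally (listKubeSystemContainers is an invented helper, not part of minikube):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listKubeSystemContainers shells out to crictl the same way the log
	// does: list all containers whose pod namespace label is kube-system
	// and return their IDs, one per output line.
	func listKubeSystemContainers() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := listKubeSystemContainers()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		fmt.Printf("found %d kube-system containers\n", len(ids))
	}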
	
	
	==> CRI-O <==
	Aug 05 23:25:16 ha-044175 crio[3920]: time="2024-08-05 23:25:16.393947715Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722900316393921803,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ef8c3a5d-7770-4b8c-aa71-84765cb3330d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:25:16 ha-044175 crio[3920]: time="2024-08-05 23:25:16.394550184Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=59390737-252c-40c5-ba30-52a4707e999e name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:25:16 ha-044175 crio[3920]: time="2024-08-05 23:25:16.394605980Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=59390737-252c-40c5-ba30-52a4707e999e name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:25:16 ha-044175 crio[3920]: time="2024-08-05 23:25:16.395023795Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd7e94739a082d7384a7b998066384667954ebe9cc11847395a104db1a104317,PodSandboxId:77ac7fe6a83e0516a216fd1d55d638ed87cfcdf5723e5e28856ee5df04b14760,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722900187738951526,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d30d1a5b-cfbe-4de6-a964-75c32e5dbf62,},Annotations:map[string]string{io.kubernetes.container.hash: 4378961a,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6528b925e75da18994cd673a201712eb241eeff865202c130034f40f0a350bb8,PodSandboxId:0f530473c6518daba2504d48da181c58689c44ffd19685987529bd79bbfdd8bf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722900169724492024,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de889c914a63f88b5552d92d7c04005b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cf9b7cb63859c9cfe968fc20b9dacecfc681905714bc14a19a78ba20314f787,PodSandboxId:f6231b23266daa7beda5c2eb7b84162e5fe7c14db8b3c9ddcd78304bf2ec722c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722900162729981582,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5280d6dbae40883a34349dd31a13a779,},Annotations:map[string]string{io.kubernetes.container.hash: bd2d1b8f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77b449aa0776d43116dbd794f0249b6e5fc5d747d7f6a8bc9604aebafc20ba74,PodSandboxId:2ff7308f4be3e77295c107b65333964734b52e07163e7f28b5c122b5225d1d4d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722900156038540995,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wmfql,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfc8bad7-d43d-4beb-991e-339a4ce96ab5,},Annotations:map[string]string{io.kubernetes.container.hash: fc00d50e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91055df10dc934cc3b2614f239bef7e465aa9809f34bba79c6de90604d74f7ca,PodSandboxId:68d4fc648e15948920c68a4aad97654ab8e34af2ae6e4e2ecdd3c173abf8148d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722900137200072773,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd673cb8fe1efcc8b643555b76eaad93,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a11b5de6fd020c3af228be69825e370ecef21ab78d774519dac722cf721bb6e6,PodSandboxId:3f0c789e63c6b8da2eaddf246bf22fac58253370f7977c637db0653e6efb8ad4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722900124470501686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g9bml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd474413-e416-48db-a7bf-f3c40675819b,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd67db4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"cont
ainerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:224c4967d5e92ceb088df11f70040bbd62d3bf073b04182cb32278b2db2419b1,PodSandboxId:77ac7fe6a83e0516a216fd1d55d638ed87cfcdf5723e5e28856ee5df04b14760,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722900122857826642,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d30d1a5b-cfbe-4de6-a964-75c32e5dbf62,},Annotations:map[string]string{io.kubernete
s.container.hash: 4378961a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97768d7c5371dd0c06071b82c8baadd28ee604281812facf0dbd4a723ea92274,PodSandboxId:b949dd01383277f7e3efd577b7b6302bc9888e365a2106061d4a3a3119168a36,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1722900122962106602,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xqx4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8455705e-b140-4f1e-abff-6a71bbb5415f,},Annotations:map[string]string{io.kubernetes.container.hash: 4a9283b6,io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aecc482892c69f412b19a67ecbfb961e4799ff113afee62cf254d8accc9e43a,PodSandboxId:e82fdb05fd230a5ff78128ae533e9617633b9f37f9a0671378ee9706bc2188c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722900122848074225,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vzhst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9c09745-be29-4403-9e7d-f9e4eaae5cac,},Annotations:map[string]string{io.kubernetes.container.hash: 1a8c310a,io.kubernetes.container.ports: [{\"nam
e\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da62836e55aaaf8eee39a34113a3d41ba6489986d26134bed80020f8c7164507,PodSandboxId:5d40c713023d2ce8f1fd3f024181a8566c041373be34fbbdd28a7966391af628,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722900122740850436,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-044175,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 47fd3d59fe4024c671f4b57dbae12a83,},Annotations:map[string]string{io.kubernetes.container.hash: fa9a7bc3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f43d5e7445c285e5783c937039be219df8aaea8c9db899259f8d24c895a378c,PodSandboxId:1e5c99969ac60dcfb40f80c63b009c0e6efc07de9fccdd5c48b9097ed4f8bf63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722900122542678745,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vj5sd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c9
cdcb-e1b7-44c8-a6e3-5e5aeb76ba03,},Annotations:map[string]string{io.kubernetes.container.hash: a40979c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd436770dad332628ad6a3b7fea663d52dda62901d07f6c1bfa5cf82ddae4f61,PodSandboxId:0f530473c6518daba2504d48da181c58689c44ffd19685987529bd79bbfdd8bf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722900122697717291,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: de889c914a63f88b5552d92d7c04005b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95d6da5b264d99c2ae66291b9df0943d6f8ac4b1743a5bef2caebaaa9fa1694c,PodSandboxId:f6231b23266daa7beda5c2eb7b84162e5fe7c14db8b3c9ddcd78304bf2ec722c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722900122673726758,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5280d6dbae40883a3
4349dd31a13a779,},Annotations:map[string]string{io.kubernetes.container.hash: bd2d1b8f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5537b3a8dbcb27d26dc336a48652fdd3385ec0fb3b5169e72e472a665bc2e3ed,PodSandboxId:0b1220acf56ca1985bed119e03dfdc76cb09d54439c45a2488b7b06933c1f3be,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722900122644546831,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87091e6c521c934e57911d0cd84fc454,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d46bdf5c93d9a335000c2d92e3814610ae1e74850c28c7ec832821e7ed10c1b6,PodSandboxId:212b1287cb785d37bec039a02eceff99c8d4258dd1905092b47149fba9f31b8e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722900103405605088,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g9bml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd474413-e416-48db-a7bf-f3c40675819b,},Annotations:map[string]string{io.ku
bernetes.container.hash: 1bd67db4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f7140ac408890dd788c7a9d6a9857531edad86ff751157ac035e6ab0d4afdc,PodSandboxId:1bf94d816bd6b0f9325f20c0b2453330291a5dfa79448419ddd925a97f951bb9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722899618925272516,Labels:map[string]str
ing{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wmfql,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfc8bad7-d43d-4beb-991e-339a4ce96ab5,},Annotations:map[string]string{io.kubernetes.container.hash: fc00d50e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e65205c398221a15eecea1ec1092d54f364a44886b05149400c7be5ffafc3285,PodSandboxId:0df1c00cbbb9d6891997d631537dd7662e552d8dca3cea20f0b653ed34f6f7bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722899473822035995,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vzhst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9c09745-be29-4403-9e7d-f9e4eaae5cac,},Annotations:map[string]string{io.kubernetes.container.hash: 1a8c310a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97fa319bea82614cab7525f9052bcc8a09fad765b260045dbf0d0fa0ca0290b2,PodSandboxId:4f369251bc6de76b6eba2d8a6404cb53a6bcba17f58bd09854de9edd65d080fa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1722899461696983959,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xqx4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8455705e-b140-4f1e-abff-6a71bbb5415f,},Annotations:map[string]string{io.kubernetes.container.hash: 4a9283b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04c382fd4a32fe8685a6f643ecf7a291e4d542c2223975f9df92991fe566b12a,PodSandboxId:b7b77d3f5c8a24f9906eb41c479b7254cd21f7c4d0c34b7014bdfa5f666df829,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722899457757352731,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vj5sd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c9cdcb-e1b7-44c8-a6e3-5e5aeb76ba03,},Annotations:map[string]string{io.kubernetes.container.hash: a40979c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0893967672c7dc591bbcf220e56601b8a46fc11f07e63adbadaddec59ec1803,PodSandboxId:c7f5da3aca5fb3bac198b9144677aac33c3f5317946dad29f46e726a35d2c596,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722899438287916526,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47fd3d59fe4024c671f4b57dbae12a83,},Annotations:map[string]string{io.kubernetes.container.hash: fa9a7bc3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a85f2254a23cdec7e89ff8de2e31b06ddf2853808330965760217f1fd834004,PodSandboxId:57dd6eb50740256e4db3c59d0c1d850b0ba784d01abbeb7f8ea139160576fc43,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722899438266931166,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87091e6c521c934e57911d0cd84fc454,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=59390737-252c-40c5-ba30-52a4707e999e name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:25:16 ha-044175 crio[3920]: time="2024-08-05 23:25:16.443180011Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=04b97ceb-fec2-459e-a1d4-d74660ccdb8b name=/runtime.v1.RuntimeService/Version
	Aug 05 23:25:16 ha-044175 crio[3920]: time="2024-08-05 23:25:16.443257259Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=04b97ceb-fec2-459e-a1d4-d74660ccdb8b name=/runtime.v1.RuntimeService/Version
	Aug 05 23:25:16 ha-044175 crio[3920]: time="2024-08-05 23:25:16.444524834Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=43a876c3-5dc3-4bb5-aed9-9436420ce445 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:25:16 ha-044175 crio[3920]: time="2024-08-05 23:25:16.445250705Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722900316445224605,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=43a876c3-5dc3-4bb5-aed9-9436420ce445 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:25:16 ha-044175 crio[3920]: time="2024-08-05 23:25:16.446205899Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=45a17831-9154-4d47-9cb7-579370837150 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:25:16 ha-044175 crio[3920]: time="2024-08-05 23:25:16.446286612Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=45a17831-9154-4d47-9cb7-579370837150 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:25:16 ha-044175 crio[3920]: time="2024-08-05 23:25:16.447682284Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd7e94739a082d7384a7b998066384667954ebe9cc11847395a104db1a104317,PodSandboxId:77ac7fe6a83e0516a216fd1d55d638ed87cfcdf5723e5e28856ee5df04b14760,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722900187738951526,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d30d1a5b-cfbe-4de6-a964-75c32e5dbf62,},Annotations:map[string]string{io.kubernetes.container.hash: 4378961a,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6528b925e75da18994cd673a201712eb241eeff865202c130034f40f0a350bb8,PodSandboxId:0f530473c6518daba2504d48da181c58689c44ffd19685987529bd79bbfdd8bf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722900169724492024,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de889c914a63f88b5552d92d7c04005b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cf9b7cb63859c9cfe968fc20b9dacecfc681905714bc14a19a78ba20314f787,PodSandboxId:f6231b23266daa7beda5c2eb7b84162e5fe7c14db8b3c9ddcd78304bf2ec722c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722900162729981582,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5280d6dbae40883a34349dd31a13a779,},Annotations:map[string]string{io.kubernetes.container.hash: bd2d1b8f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77b449aa0776d43116dbd794f0249b6e5fc5d747d7f6a8bc9604aebafc20ba74,PodSandboxId:2ff7308f4be3e77295c107b65333964734b52e07163e7f28b5c122b5225d1d4d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722900156038540995,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wmfql,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfc8bad7-d43d-4beb-991e-339a4ce96ab5,},Annotations:map[string]string{io.kubernetes.container.hash: fc00d50e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91055df10dc934cc3b2614f239bef7e465aa9809f34bba79c6de90604d74f7ca,PodSandboxId:68d4fc648e15948920c68a4aad97654ab8e34af2ae6e4e2ecdd3c173abf8148d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722900137200072773,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd673cb8fe1efcc8b643555b76eaad93,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a11b5de6fd020c3af228be69825e370ecef21ab78d774519dac722cf721bb6e6,PodSandboxId:3f0c789e63c6b8da2eaddf246bf22fac58253370f7977c637db0653e6efb8ad4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722900124470501686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g9bml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd474413-e416-48db-a7bf-f3c40675819b,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd67db4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"cont
ainerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:224c4967d5e92ceb088df11f70040bbd62d3bf073b04182cb32278b2db2419b1,PodSandboxId:77ac7fe6a83e0516a216fd1d55d638ed87cfcdf5723e5e28856ee5df04b14760,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722900122857826642,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d30d1a5b-cfbe-4de6-a964-75c32e5dbf62,},Annotations:map[string]string{io.kubernete
s.container.hash: 4378961a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97768d7c5371dd0c06071b82c8baadd28ee604281812facf0dbd4a723ea92274,PodSandboxId:b949dd01383277f7e3efd577b7b6302bc9888e365a2106061d4a3a3119168a36,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1722900122962106602,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xqx4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8455705e-b140-4f1e-abff-6a71bbb5415f,},Annotations:map[string]string{io.kubernetes.container.hash: 4a9283b6,io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aecc482892c69f412b19a67ecbfb961e4799ff113afee62cf254d8accc9e43a,PodSandboxId:e82fdb05fd230a5ff78128ae533e9617633b9f37f9a0671378ee9706bc2188c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722900122848074225,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vzhst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9c09745-be29-4403-9e7d-f9e4eaae5cac,},Annotations:map[string]string{io.kubernetes.container.hash: 1a8c310a,io.kubernetes.container.ports: [{\"nam
e\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da62836e55aaaf8eee39a34113a3d41ba6489986d26134bed80020f8c7164507,PodSandboxId:5d40c713023d2ce8f1fd3f024181a8566c041373be34fbbdd28a7966391af628,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722900122740850436,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-044175,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 47fd3d59fe4024c671f4b57dbae12a83,},Annotations:map[string]string{io.kubernetes.container.hash: fa9a7bc3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f43d5e7445c285e5783c937039be219df8aaea8c9db899259f8d24c895a378c,PodSandboxId:1e5c99969ac60dcfb40f80c63b009c0e6efc07de9fccdd5c48b9097ed4f8bf63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722900122542678745,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vj5sd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c9
cdcb-e1b7-44c8-a6e3-5e5aeb76ba03,},Annotations:map[string]string{io.kubernetes.container.hash: a40979c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd436770dad332628ad6a3b7fea663d52dda62901d07f6c1bfa5cf82ddae4f61,PodSandboxId:0f530473c6518daba2504d48da181c58689c44ffd19685987529bd79bbfdd8bf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722900122697717291,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: de889c914a63f88b5552d92d7c04005b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95d6da5b264d99c2ae66291b9df0943d6f8ac4b1743a5bef2caebaaa9fa1694c,PodSandboxId:f6231b23266daa7beda5c2eb7b84162e5fe7c14db8b3c9ddcd78304bf2ec722c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722900122673726758,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5280d6dbae40883a3
4349dd31a13a779,},Annotations:map[string]string{io.kubernetes.container.hash: bd2d1b8f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5537b3a8dbcb27d26dc336a48652fdd3385ec0fb3b5169e72e472a665bc2e3ed,PodSandboxId:0b1220acf56ca1985bed119e03dfdc76cb09d54439c45a2488b7b06933c1f3be,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722900122644546831,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87091e6c521c934e57911d0cd84fc454,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d46bdf5c93d9a335000c2d92e3814610ae1e74850c28c7ec832821e7ed10c1b6,PodSandboxId:212b1287cb785d37bec039a02eceff99c8d4258dd1905092b47149fba9f31b8e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722900103405605088,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g9bml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd474413-e416-48db-a7bf-f3c40675819b,},Annotations:map[string]string{io.ku
bernetes.container.hash: 1bd67db4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f7140ac408890dd788c7a9d6a9857531edad86ff751157ac035e6ab0d4afdc,PodSandboxId:1bf94d816bd6b0f9325f20c0b2453330291a5dfa79448419ddd925a97f951bb9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722899618925272516,Labels:map[string]str
ing{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wmfql,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfc8bad7-d43d-4beb-991e-339a4ce96ab5,},Annotations:map[string]string{io.kubernetes.container.hash: fc00d50e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e65205c398221a15eecea1ec1092d54f364a44886b05149400c7be5ffafc3285,PodSandboxId:0df1c00cbbb9d6891997d631537dd7662e552d8dca3cea20f0b653ed34f6f7bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722899473822035995,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vzhst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9c09745-be29-4403-9e7d-f9e4eaae5cac,},Annotations:map[string]string{io.kubernetes.container.hash: 1a8c310a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97fa319bea82614cab7525f9052bcc8a09fad765b260045dbf0d0fa0ca0290b2,PodSandboxId:4f369251bc6de76b6eba2d8a6404cb53a6bcba17f58bd09854de9edd65d080fa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1722899461696983959,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xqx4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8455705e-b140-4f1e-abff-6a71bbb5415f,},Annotations:map[string]string{io.kubernetes.container.hash: 4a9283b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04c382fd4a32fe8685a6f643ecf7a291e4d542c2223975f9df92991fe566b12a,PodSandboxId:b7b77d3f5c8a24f9906eb41c479b7254cd21f7c4d0c34b7014bdfa5f666df829,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722899457757352731,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vj5sd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c9cdcb-e1b7-44c8-a6e3-5e5aeb76ba03,},Annotations:map[string]string{io.kubernetes.container.hash: a40979c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0893967672c7dc591bbcf220e56601b8a46fc11f07e63adbadaddec59ec1803,PodSandboxId:c7f5da3aca5fb3bac198b9144677aac33c3f5317946dad29f46e726a35d2c596,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722899438287916526,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47fd3d59fe4024c671f4b57dbae12a83,},Annotations:map[string]string{io.kubernetes.container.hash: fa9a7bc3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a85f2254a23cdec7e89ff8de2e31b06ddf2853808330965760217f1fd834004,PodSandboxId:57dd6eb50740256e4db3c59d0c1d850b0ba784d01abbeb7f8ea139160576fc43,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722899438266931166,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87091e6c521c934e57911d0cd84fc454,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=45a17831-9154-4d47-9cb7-579370837150 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:25:16 ha-044175 crio[3920]: time="2024-08-05 23:25:16.504789566Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bd971fcd-3860-4022-a3fb-be90357fd99c name=/runtime.v1.RuntimeService/Version
	Aug 05 23:25:16 ha-044175 crio[3920]: time="2024-08-05 23:25:16.504887830Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bd971fcd-3860-4022-a3fb-be90357fd99c name=/runtime.v1.RuntimeService/Version
	Aug 05 23:25:16 ha-044175 crio[3920]: time="2024-08-05 23:25:16.506063055Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e4e76382-30b6-4281-a003-d88813dbf15b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:25:16 ha-044175 crio[3920]: time="2024-08-05 23:25:16.506580948Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722900316506553205,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e4e76382-30b6-4281-a003-d88813dbf15b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:25:16 ha-044175 crio[3920]: time="2024-08-05 23:25:16.507193410Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=38265a3e-62f3-4d71-82eb-eb1935bd3f9c name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:25:16 ha-044175 crio[3920]: time="2024-08-05 23:25:16.507258799Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=38265a3e-62f3-4d71-82eb-eb1935bd3f9c name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:25:16 ha-044175 crio[3920]: time="2024-08-05 23:25:16.507696994Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd7e94739a082d7384a7b998066384667954ebe9cc11847395a104db1a104317,PodSandboxId:77ac7fe6a83e0516a216fd1d55d638ed87cfcdf5723e5e28856ee5df04b14760,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722900187738951526,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d30d1a5b-cfbe-4de6-a964-75c32e5dbf62,},Annotations:map[string]string{io.kubernetes.container.hash: 4378961a,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6528b925e75da18994cd673a201712eb241eeff865202c130034f40f0a350bb8,PodSandboxId:0f530473c6518daba2504d48da181c58689c44ffd19685987529bd79bbfdd8bf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722900169724492024,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de889c914a63f88b5552d92d7c04005b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cf9b7cb63859c9cfe968fc20b9dacecfc681905714bc14a19a78ba20314f787,PodSandboxId:f6231b23266daa7beda5c2eb7b84162e5fe7c14db8b3c9ddcd78304bf2ec722c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722900162729981582,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5280d6dbae40883a34349dd31a13a779,},Annotations:map[string]string{io.kubernetes.container.hash: bd2d1b8f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77b449aa0776d43116dbd794f0249b6e5fc5d747d7f6a8bc9604aebafc20ba74,PodSandboxId:2ff7308f4be3e77295c107b65333964734b52e07163e7f28b5c122b5225d1d4d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722900156038540995,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wmfql,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfc8bad7-d43d-4beb-991e-339a4ce96ab5,},Annotations:map[string]string{io.kubernetes.container.hash: fc00d50e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91055df10dc934cc3b2614f239bef7e465aa9809f34bba79c6de90604d74f7ca,PodSandboxId:68d4fc648e15948920c68a4aad97654ab8e34af2ae6e4e2ecdd3c173abf8148d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722900137200072773,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd673cb8fe1efcc8b643555b76eaad93,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a11b5de6fd020c3af228be69825e370ecef21ab78d774519dac722cf721bb6e6,PodSandboxId:3f0c789e63c6b8da2eaddf246bf22fac58253370f7977c637db0653e6efb8ad4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722900124470501686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g9bml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd474413-e416-48db-a7bf-f3c40675819b,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd67db4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"cont
ainerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:224c4967d5e92ceb088df11f70040bbd62d3bf073b04182cb32278b2db2419b1,PodSandboxId:77ac7fe6a83e0516a216fd1d55d638ed87cfcdf5723e5e28856ee5df04b14760,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722900122857826642,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d30d1a5b-cfbe-4de6-a964-75c32e5dbf62,},Annotations:map[string]string{io.kubernete
s.container.hash: 4378961a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97768d7c5371dd0c06071b82c8baadd28ee604281812facf0dbd4a723ea92274,PodSandboxId:b949dd01383277f7e3efd577b7b6302bc9888e365a2106061d4a3a3119168a36,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1722900122962106602,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xqx4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8455705e-b140-4f1e-abff-6a71bbb5415f,},Annotations:map[string]string{io.kubernetes.container.hash: 4a9283b6,io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aecc482892c69f412b19a67ecbfb961e4799ff113afee62cf254d8accc9e43a,PodSandboxId:e82fdb05fd230a5ff78128ae533e9617633b9f37f9a0671378ee9706bc2188c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722900122848074225,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vzhst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9c09745-be29-4403-9e7d-f9e4eaae5cac,},Annotations:map[string]string{io.kubernetes.container.hash: 1a8c310a,io.kubernetes.container.ports: [{\"nam
e\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da62836e55aaaf8eee39a34113a3d41ba6489986d26134bed80020f8c7164507,PodSandboxId:5d40c713023d2ce8f1fd3f024181a8566c041373be34fbbdd28a7966391af628,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722900122740850436,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-044175,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 47fd3d59fe4024c671f4b57dbae12a83,},Annotations:map[string]string{io.kubernetes.container.hash: fa9a7bc3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f43d5e7445c285e5783c937039be219df8aaea8c9db899259f8d24c895a378c,PodSandboxId:1e5c99969ac60dcfb40f80c63b009c0e6efc07de9fccdd5c48b9097ed4f8bf63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722900122542678745,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vj5sd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c9
cdcb-e1b7-44c8-a6e3-5e5aeb76ba03,},Annotations:map[string]string{io.kubernetes.container.hash: a40979c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd436770dad332628ad6a3b7fea663d52dda62901d07f6c1bfa5cf82ddae4f61,PodSandboxId:0f530473c6518daba2504d48da181c58689c44ffd19685987529bd79bbfdd8bf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722900122697717291,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: de889c914a63f88b5552d92d7c04005b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95d6da5b264d99c2ae66291b9df0943d6f8ac4b1743a5bef2caebaaa9fa1694c,PodSandboxId:f6231b23266daa7beda5c2eb7b84162e5fe7c14db8b3c9ddcd78304bf2ec722c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722900122673726758,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5280d6dbae40883a3
4349dd31a13a779,},Annotations:map[string]string{io.kubernetes.container.hash: bd2d1b8f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5537b3a8dbcb27d26dc336a48652fdd3385ec0fb3b5169e72e472a665bc2e3ed,PodSandboxId:0b1220acf56ca1985bed119e03dfdc76cb09d54439c45a2488b7b06933c1f3be,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722900122644546831,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87091e6c521c934e57911d0cd84fc454,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d46bdf5c93d9a335000c2d92e3814610ae1e74850c28c7ec832821e7ed10c1b6,PodSandboxId:212b1287cb785d37bec039a02eceff99c8d4258dd1905092b47149fba9f31b8e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722900103405605088,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g9bml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd474413-e416-48db-a7bf-f3c40675819b,},Annotations:map[string]string{io.ku
bernetes.container.hash: 1bd67db4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f7140ac408890dd788c7a9d6a9857531edad86ff751157ac035e6ab0d4afdc,PodSandboxId:1bf94d816bd6b0f9325f20c0b2453330291a5dfa79448419ddd925a97f951bb9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722899618925272516,Labels:map[string]str
ing{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wmfql,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfc8bad7-d43d-4beb-991e-339a4ce96ab5,},Annotations:map[string]string{io.kubernetes.container.hash: fc00d50e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e65205c398221a15eecea1ec1092d54f364a44886b05149400c7be5ffafc3285,PodSandboxId:0df1c00cbbb9d6891997d631537dd7662e552d8dca3cea20f0b653ed34f6f7bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722899473822035995,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vzhst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9c09745-be29-4403-9e7d-f9e4eaae5cac,},Annotations:map[string]string{io.kubernetes.container.hash: 1a8c310a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97fa319bea82614cab7525f9052bcc8a09fad765b260045dbf0d0fa0ca0290b2,PodSandboxId:4f369251bc6de76b6eba2d8a6404cb53a6bcba17f58bd09854de9edd65d080fa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1722899461696983959,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xqx4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8455705e-b140-4f1e-abff-6a71bbb5415f,},Annotations:map[string]string{io.kubernetes.container.hash: 4a9283b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04c382fd4a32fe8685a6f643ecf7a291e4d542c2223975f9df92991fe566b12a,PodSandboxId:b7b77d3f5c8a24f9906eb41c479b7254cd21f7c4d0c34b7014bdfa5f666df829,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722899457757352731,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vj5sd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c9cdcb-e1b7-44c8-a6e3-5e5aeb76ba03,},Annotations:map[string]string{io.kubernetes.container.hash: a40979c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0893967672c7dc591bbcf220e56601b8a46fc11f07e63adbadaddec59ec1803,PodSandboxId:c7f5da3aca5fb3bac198b9144677aac33c3f5317946dad29f46e726a35d2c596,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722899438287916526,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47fd3d59fe4024c671f4b57dbae12a83,},Annotations:map[string]string{io.kubernetes.container.hash: fa9a7bc3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a85f2254a23cdec7e89ff8de2e31b06ddf2853808330965760217f1fd834004,PodSandboxId:57dd6eb50740256e4db3c59d0c1d850b0ba784d01abbeb7f8ea139160576fc43,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722899438266931166,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87091e6c521c934e57911d0cd84fc454,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=38265a3e-62f3-4d71-82eb-eb1935bd3f9c name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:25:16 ha-044175 crio[3920]: time="2024-08-05 23:25:16.559098568Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=26cd8e0b-4181-4618-9910-00b249e99fec name=/runtime.v1.RuntimeService/Version
	Aug 05 23:25:16 ha-044175 crio[3920]: time="2024-08-05 23:25:16.559172595Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=26cd8e0b-4181-4618-9910-00b249e99fec name=/runtime.v1.RuntimeService/Version
	Aug 05 23:25:16 ha-044175 crio[3920]: time="2024-08-05 23:25:16.560850037Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bca3571b-2d62-4543-abb6-b83ebb55cd4d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:25:16 ha-044175 crio[3920]: time="2024-08-05 23:25:16.561789768Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722900316561760238,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bca3571b-2d62-4543-abb6-b83ebb55cd4d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:25:16 ha-044175 crio[3920]: time="2024-08-05 23:25:16.562667420Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=90dbb306-34a1-463c-93b4-cfdc019481d7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:25:16 ha-044175 crio[3920]: time="2024-08-05 23:25:16.562724710Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=90dbb306-34a1-463c-93b4-cfdc019481d7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:25:16 ha-044175 crio[3920]: time="2024-08-05 23:25:16.563141163Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd7e94739a082d7384a7b998066384667954ebe9cc11847395a104db1a104317,PodSandboxId:77ac7fe6a83e0516a216fd1d55d638ed87cfcdf5723e5e28856ee5df04b14760,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722900187738951526,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d30d1a5b-cfbe-4de6-a964-75c32e5dbf62,},Annotations:map[string]string{io.kubernetes.container.hash: 4378961a,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6528b925e75da18994cd673a201712eb241eeff865202c130034f40f0a350bb8,PodSandboxId:0f530473c6518daba2504d48da181c58689c44ffd19685987529bd79bbfdd8bf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722900169724492024,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de889c914a63f88b5552d92d7c04005b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cf9b7cb63859c9cfe968fc20b9dacecfc681905714bc14a19a78ba20314f787,PodSandboxId:f6231b23266daa7beda5c2eb7b84162e5fe7c14db8b3c9ddcd78304bf2ec722c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722900162729981582,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5280d6dbae40883a34349dd31a13a779,},Annotations:map[string]string{io.kubernetes.container.hash: bd2d1b8f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77b449aa0776d43116dbd794f0249b6e5fc5d747d7f6a8bc9604aebafc20ba74,PodSandboxId:2ff7308f4be3e77295c107b65333964734b52e07163e7f28b5c122b5225d1d4d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722900156038540995,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wmfql,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfc8bad7-d43d-4beb-991e-339a4ce96ab5,},Annotations:map[string]string{io.kubernetes.container.hash: fc00d50e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91055df10dc934cc3b2614f239bef7e465aa9809f34bba79c6de90604d74f7ca,PodSandboxId:68d4fc648e15948920c68a4aad97654ab8e34af2ae6e4e2ecdd3c173abf8148d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722900137200072773,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd673cb8fe1efcc8b643555b76eaad93,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a11b5de6fd020c3af228be69825e370ecef21ab78d774519dac722cf721bb6e6,PodSandboxId:3f0c789e63c6b8da2eaddf246bf22fac58253370f7977c637db0653e6efb8ad4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722900124470501686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g9bml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd474413-e416-48db-a7bf-f3c40675819b,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd67db4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"cont
ainerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:224c4967d5e92ceb088df11f70040bbd62d3bf073b04182cb32278b2db2419b1,PodSandboxId:77ac7fe6a83e0516a216fd1d55d638ed87cfcdf5723e5e28856ee5df04b14760,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722900122857826642,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d30d1a5b-cfbe-4de6-a964-75c32e5dbf62,},Annotations:map[string]string{io.kubernete
s.container.hash: 4378961a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97768d7c5371dd0c06071b82c8baadd28ee604281812facf0dbd4a723ea92274,PodSandboxId:b949dd01383277f7e3efd577b7b6302bc9888e365a2106061d4a3a3119168a36,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1722900122962106602,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xqx4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8455705e-b140-4f1e-abff-6a71bbb5415f,},Annotations:map[string]string{io.kubernetes.container.hash: 4a9283b6,io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aecc482892c69f412b19a67ecbfb961e4799ff113afee62cf254d8accc9e43a,PodSandboxId:e82fdb05fd230a5ff78128ae533e9617633b9f37f9a0671378ee9706bc2188c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722900122848074225,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vzhst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9c09745-be29-4403-9e7d-f9e4eaae5cac,},Annotations:map[string]string{io.kubernetes.container.hash: 1a8c310a,io.kubernetes.container.ports: [{\"nam
e\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da62836e55aaaf8eee39a34113a3d41ba6489986d26134bed80020f8c7164507,PodSandboxId:5d40c713023d2ce8f1fd3f024181a8566c041373be34fbbdd28a7966391af628,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722900122740850436,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-044175,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 47fd3d59fe4024c671f4b57dbae12a83,},Annotations:map[string]string{io.kubernetes.container.hash: fa9a7bc3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f43d5e7445c285e5783c937039be219df8aaea8c9db899259f8d24c895a378c,PodSandboxId:1e5c99969ac60dcfb40f80c63b009c0e6efc07de9fccdd5c48b9097ed4f8bf63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722900122542678745,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vj5sd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c9
cdcb-e1b7-44c8-a6e3-5e5aeb76ba03,},Annotations:map[string]string{io.kubernetes.container.hash: a40979c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd436770dad332628ad6a3b7fea663d52dda62901d07f6c1bfa5cf82ddae4f61,PodSandboxId:0f530473c6518daba2504d48da181c58689c44ffd19685987529bd79bbfdd8bf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722900122697717291,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: de889c914a63f88b5552d92d7c04005b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95d6da5b264d99c2ae66291b9df0943d6f8ac4b1743a5bef2caebaaa9fa1694c,PodSandboxId:f6231b23266daa7beda5c2eb7b84162e5fe7c14db8b3c9ddcd78304bf2ec722c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722900122673726758,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5280d6dbae40883a3
4349dd31a13a779,},Annotations:map[string]string{io.kubernetes.container.hash: bd2d1b8f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5537b3a8dbcb27d26dc336a48652fdd3385ec0fb3b5169e72e472a665bc2e3ed,PodSandboxId:0b1220acf56ca1985bed119e03dfdc76cb09d54439c45a2488b7b06933c1f3be,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722900122644546831,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87091e6c521c934e57911d0cd84fc454,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d46bdf5c93d9a335000c2d92e3814610ae1e74850c28c7ec832821e7ed10c1b6,PodSandboxId:212b1287cb785d37bec039a02eceff99c8d4258dd1905092b47149fba9f31b8e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722900103405605088,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g9bml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd474413-e416-48db-a7bf-f3c40675819b,},Annotations:map[string]string{io.ku
bernetes.container.hash: 1bd67db4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f7140ac408890dd788c7a9d6a9857531edad86ff751157ac035e6ab0d4afdc,PodSandboxId:1bf94d816bd6b0f9325f20c0b2453330291a5dfa79448419ddd925a97f951bb9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722899618925272516,Labels:map[string]str
ing{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wmfql,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfc8bad7-d43d-4beb-991e-339a4ce96ab5,},Annotations:map[string]string{io.kubernetes.container.hash: fc00d50e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e65205c398221a15eecea1ec1092d54f364a44886b05149400c7be5ffafc3285,PodSandboxId:0df1c00cbbb9d6891997d631537dd7662e552d8dca3cea20f0b653ed34f6f7bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722899473822035995,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vzhst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9c09745-be29-4403-9e7d-f9e4eaae5cac,},Annotations:map[string]string{io.kubernetes.container.hash: 1a8c310a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97fa319bea82614cab7525f9052bcc8a09fad765b260045dbf0d0fa0ca0290b2,PodSandboxId:4f369251bc6de76b6eba2d8a6404cb53a6bcba17f58bd09854de9edd65d080fa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1722899461696983959,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xqx4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8455705e-b140-4f1e-abff-6a71bbb5415f,},Annotations:map[string]string{io.kubernetes.container.hash: 4a9283b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04c382fd4a32fe8685a6f643ecf7a291e4d542c2223975f9df92991fe566b12a,PodSandboxId:b7b77d3f5c8a24f9906eb41c479b7254cd21f7c4d0c34b7014bdfa5f666df829,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722899457757352731,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vj5sd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c9cdcb-e1b7-44c8-a6e3-5e5aeb76ba03,},Annotations:map[string]string{io.kubernetes.container.hash: a40979c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0893967672c7dc591bbcf220e56601b8a46fc11f07e63adbadaddec59ec1803,PodSandboxId:c7f5da3aca5fb3bac198b9144677aac33c3f5317946dad29f46e726a35d2c596,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722899438287916526,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47fd3d59fe4024c671f4b57dbae12a83,},Annotations:map[string]string{io.kubernetes.container.hash: fa9a7bc3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a85f2254a23cdec7e89ff8de2e31b06ddf2853808330965760217f1fd834004,PodSandboxId:57dd6eb50740256e4db3c59d0c1d850b0ba784d01abbeb7f8ea139160576fc43,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722899438266931166,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87091e6c521c934e57911d0cd84fc454,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=90dbb306-34a1-463c-93b4-cfdc019481d7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fd7e94739a082       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago       Running             storage-provisioner       4                   77ac7fe6a83e0       storage-provisioner
	6528b925e75da       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      2 minutes ago       Running             kube-controller-manager   2                   0f530473c6518       kube-controller-manager-ha-044175
	7cf9b7cb63859       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      2 minutes ago       Running             kube-apiserver            3                   f6231b23266da       kube-apiserver-ha-044175
	77b449aa0776d       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago       Running             busybox                   1                   2ff7308f4be3e       busybox-fc5497c4f-wmfql
	91055df10dc93       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago       Running             kube-vip                  0                   68d4fc648e159       kube-vip-ha-044175
	a11b5de6fd020       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   2                   3f0c789e63c6b       coredns-7db6d8ff4d-g9bml
	97768d7c5371d       917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557                                      3 minutes ago       Running             kindnet-cni               1                   b949dd0138327       kindnet-xqx4z
	224c4967d5e92       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       3                   77ac7fe6a83e0       storage-provisioner
	8aecc482892c6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   1                   e82fdb05fd230       coredns-7db6d8ff4d-vzhst
	da62836e55aaa       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago       Running             etcd                      1                   5d40c713023d2       etcd-ha-044175
	dd436770dad33       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      3 minutes ago       Exited              kube-controller-manager   1                   0f530473c6518       kube-controller-manager-ha-044175
	95d6da5b264d9       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      3 minutes ago       Exited              kube-apiserver            2                   f6231b23266da       kube-apiserver-ha-044175
	5537b3a8dbcb2       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      3 minutes ago       Running             kube-scheduler            1                   0b1220acf56ca       kube-scheduler-ha-044175
	5f43d5e7445c2       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      3 minutes ago       Running             kube-proxy                1                   1e5c99969ac60       kube-proxy-vj5sd
	d46bdf5c93d9a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Exited              coredns                   1                   212b1287cb785       coredns-7db6d8ff4d-g9bml
	14f7140ac4088       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago      Exited              busybox                   0                   1bf94d816bd6b       busybox-fc5497c4f-wmfql
	e65205c398221       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago      Exited              coredns                   0                   0df1c00cbbb9d       coredns-7db6d8ff4d-vzhst
	97fa319bea826       docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3    14 minutes ago      Exited              kindnet-cni               0                   4f369251bc6de       kindnet-xqx4z
	04c382fd4a32f       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      14 minutes ago      Exited              kube-proxy                0                   b7b77d3f5c8a2       kube-proxy-vj5sd
	b0893967672c7       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      14 minutes ago      Exited              etcd                      0                   c7f5da3aca5fb       etcd-ha-044175
	2a85f2254a23c       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      14 minutes ago      Exited              kube-scheduler            0                   57dd6eb507402       kube-scheduler-ha-044175
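	The table above is CRI-O's full container inventory on the ha-044175 control-plane node, including the exited first-attempt containers that were replaced after the restart. A minimal sketch of pulling the same inventory directly over SSH, assuming the ha-044175 profile used in this run (crictl runs as root on the node):
	
	# Illustrative only: list every container CRI-O knows about, running and exited.
	out/minikube-linux-amd64 -p ha-044175 ssh "sudo crictl ps -a"
	
	# Narrow the listing to one component, e.g. the kube-apiserver attempts shown above.
	out/minikube-linux-amd64 -p ha-044175 ssh "sudo crictl ps -a --name kube-apiserver"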
	
	
	==> coredns [8aecc482892c69f412b19a67ecbfb961e4799ff113afee62cf254d8accc9e43a] <==
	Trace[448270504]: [10.796676678s] [10.796676678s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:44896->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:46242->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1977882846]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (05-Aug-2024 23:22:17.552) (total time: 10490ms):
	Trace[1977882846]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:46242->10.96.0.1:443: read: connection reset by peer 10490ms (23:22:28.043)
	Trace[1977882846]: [10.49098567s] [10.49098567s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:46242->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [a11b5de6fd020c3af228be69825e370ecef21ab78d774519dac722cf721bb6e6] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:47698->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:47698->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [d46bdf5c93d9a335000c2d92e3814610ae1e74850c28c7ec832821e7ed10c1b6] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:59273 - 2512 "HINFO IN 3207962830486949060.9184539038836446459. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014088898s
	
	
	==> coredns [e65205c398221a15eecea1ec1092d54f364a44886b05149400c7be5ffafc3285] <==
	[INFO] 10.244.2.2:56153 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00013893s
	[INFO] 10.244.1.2:33342 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001850863s
	[INFO] 10.244.1.2:42287 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000148733s
	[INFO] 10.244.1.2:54735 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100517s
	[INFO] 10.244.1.2:59789 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001317452s
	[INFO] 10.244.0.4:40404 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000074048s
	[INFO] 10.244.0.4:48828 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002066965s
	[INFO] 10.244.0.4:45447 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000152682s
	[INFO] 10.244.2.2:44344 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146254s
	[INFO] 10.244.2.2:44960 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000197937s
	[INFO] 10.244.1.2:46098 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107825s
	[INFO] 10.244.0.4:53114 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104641s
	[INFO] 10.244.0.4:55920 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073557s
	[INFO] 10.244.2.2:36832 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001192s
	[INFO] 10.244.2.2:36836 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00014154s
	[INFO] 10.244.1.2:35009 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00021099s
	[INFO] 10.244.1.2:49630 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00009192s
	[INFO] 10.244.1.2:49164 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000128354s
	[INFO] 10.244.0.4:33938 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000080255s
	[INFO] 10.244.0.4:34551 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000092007s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-044175
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-044175
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=ha-044175
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_05T23_10_45_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 23:10:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-044175
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 23:25:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 23:22:44 +0000   Mon, 05 Aug 2024 23:10:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 23:22:44 +0000   Mon, 05 Aug 2024 23:10:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 23:22:44 +0000   Mon, 05 Aug 2024 23:10:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 23:22:44 +0000   Mon, 05 Aug 2024 23:11:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.57
	  Hostname:    ha-044175
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a7535c9f09f54963b658b49234079761
	  System UUID:                a7535c9f-09f5-4963-b658-b49234079761
	  Boot ID:                    97ae6699-97e9-4260-9f54-aa4546b6e1f0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-wmfql              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7db6d8ff4d-g9bml             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7db6d8ff4d-vzhst             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-044175                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-xqx4z                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-044175             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-044175    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-vj5sd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-044175             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-044175                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 14m                    kube-proxy       
	  Normal   Starting                 2m32s                  kube-proxy       
	  Normal   NodeHasNoDiskPressure    14m                    kubelet          Node ha-044175 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  14m                    kubelet          Node ha-044175 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     14m                    kubelet          Node ha-044175 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           14m                    node-controller  Node ha-044175 event: Registered Node ha-044175 in Controller
	  Normal   NodeReady                14m                    kubelet          Node ha-044175 status is now: NodeReady
	  Normal   RegisteredNode           13m                    node-controller  Node ha-044175 event: Registered Node ha-044175 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-044175 event: Registered Node ha-044175 in Controller
	  Warning  ContainerGCFailed        3m33s (x2 over 4m33s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           2m28s                  node-controller  Node ha-044175 event: Registered Node ha-044175 in Controller
	  Normal   RegisteredNode           2m16s                  node-controller  Node ha-044175 event: Registered Node ha-044175 in Controller
	  Normal   RegisteredNode           32s                    node-controller  Node ha-044175 event: Registered Node ha-044175 in Controller
	
	
	Name:               ha-044175-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-044175-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=ha-044175
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_05T23_11_52_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 23:11:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-044175-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 23:25:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 23:23:27 +0000   Mon, 05 Aug 2024 23:22:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 23:23:27 +0000   Mon, 05 Aug 2024 23:22:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 23:23:27 +0000   Mon, 05 Aug 2024 23:22:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 23:23:27 +0000   Mon, 05 Aug 2024 23:22:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.112
	  Hostname:    ha-044175-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3b8a8f60868345a4bc1ba1393dbdecaf
	  System UUID:                3b8a8f60-8683-45a4-bc1b-a1393dbdecaf
	  Boot ID:                    71e8903c-f0e2-496b-815e-23868eec6c11
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tpqpw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-044175-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-hqhgc                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-044175-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-044175-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-jfs9q                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-044175-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-044175-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m28s                  kube-proxy       
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-044175-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-044175-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-044175-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                    node-controller  Node ha-044175-m02 event: Registered Node ha-044175-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-044175-m02 event: Registered Node ha-044175-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-044175-m02 event: Registered Node ha-044175-m02 in Controller
	  Normal  NodeNotReady             9m43s                  node-controller  Node ha-044175-m02 status is now: NodeNotReady
	  Normal  Starting                 2m59s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m59s (x8 over 2m59s)  kubelet          Node ha-044175-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m59s (x8 over 2m59s)  kubelet          Node ha-044175-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m59s (x7 over 2m59s)  kubelet          Node ha-044175-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m28s                  node-controller  Node ha-044175-m02 event: Registered Node ha-044175-m02 in Controller
	  Normal  RegisteredNode           2m16s                  node-controller  Node ha-044175-m02 event: Registered Node ha-044175-m02 in Controller
	  Normal  RegisteredNode           32s                    node-controller  Node ha-044175-m02 event: Registered Node ha-044175-m02 in Controller
	
	
	Name:               ha-044175-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-044175-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=ha-044175
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_05T23_13_09_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 23:13:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-044175-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 23:25:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 23:24:49 +0000   Mon, 05 Aug 2024 23:24:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 23:24:49 +0000   Mon, 05 Aug 2024 23:24:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 23:24:49 +0000   Mon, 05 Aug 2024 23:24:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 23:24:49 +0000   Mon, 05 Aug 2024 23:24:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.201
	  Hostname:    ha-044175-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 37d1f61608a14177b68f3f2d22a59a87
	  System UUID:                37d1f616-08a1-4177-b68f-3f2d22a59a87
	  Boot ID:                    5746a7be-82a5-4be1-9fd5-660d3c4d6c2f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-fqp2t                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-044175-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-mc7wf                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-044175-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-044175-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-4ql5l                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-044175-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-044175-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 41s                kube-proxy       
	  Normal   RegisteredNode           12m                node-controller  Node ha-044175-m03 event: Registered Node ha-044175-m03 in Controller
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node ha-044175-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node ha-044175-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node ha-044175-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                node-controller  Node ha-044175-m03 event: Registered Node ha-044175-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-044175-m03 event: Registered Node ha-044175-m03 in Controller
	  Normal   RegisteredNode           2m28s              node-controller  Node ha-044175-m03 event: Registered Node ha-044175-m03 in Controller
	  Normal   RegisteredNode           2m16s              node-controller  Node ha-044175-m03 event: Registered Node ha-044175-m03 in Controller
	  Normal   NodeNotReady             108s               node-controller  Node ha-044175-m03 status is now: NodeNotReady
	  Normal   Starting                 59s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  59s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  59s (x2 over 59s)  kubelet          Node ha-044175-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s (x2 over 59s)  kubelet          Node ha-044175-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s (x2 over 59s)  kubelet          Node ha-044175-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 59s                kubelet          Node ha-044175-m03 has been rebooted, boot id: 5746a7be-82a5-4be1-9fd5-660d3c4d6c2f
	  Normal   NodeReady                59s                kubelet          Node ha-044175-m03 status is now: NodeReady
	  Normal   RegisteredNode           32s                node-controller  Node ha-044175-m03 event: Registered Node ha-044175-m03 in Controller
	
	
	Name:               ha-044175-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-044175-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=ha-044175
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_05T23_14_14_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 23:14:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-044175-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 23:25:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 23:25:09 +0000   Mon, 05 Aug 2024 23:25:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 23:25:09 +0000   Mon, 05 Aug 2024 23:25:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 23:25:09 +0000   Mon, 05 Aug 2024 23:25:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 23:25:09 +0000   Mon, 05 Aug 2024 23:25:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.228
	  Hostname:    ha-044175-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0d2536a5615e49c8bf2cb4a8d6f85b2f
	  System UUID:                0d2536a5-615e-49c8-bf2c-b4a8d6f85b2f
	  Boot ID:                    d840370c-d54f-402e-9d08-4c3e0708b35d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2rpdm       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-proxy-r5567    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x2 over 11m)  kubelet          Node ha-044175-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)  kubelet          Node ha-044175-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x2 over 11m)  kubelet          Node ha-044175-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-044175-m04 event: Registered Node ha-044175-m04 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-044175-m04 event: Registered Node ha-044175-m04 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-044175-m04 event: Registered Node ha-044175-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-044175-m04 status is now: NodeReady
	  Normal   RegisteredNode           2m28s              node-controller  Node ha-044175-m04 event: Registered Node ha-044175-m04 in Controller
	  Normal   RegisteredNode           2m16s              node-controller  Node ha-044175-m04 event: Registered Node ha-044175-m04 in Controller
	  Normal   NodeNotReady             108s               node-controller  Node ha-044175-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           32s                node-controller  Node ha-044175-m04 event: Registered Node ha-044175-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8s (x3 over 8s)    kubelet          Node ha-044175-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x3 over 8s)    kubelet          Node ha-044175-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x3 over 8s)    kubelet          Node ha-044175-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 8s (x2 over 8s)    kubelet          Node ha-044175-m04 has been rebooted, boot id: d840370c-d54f-402e-9d08-4c3e0708b35d
	  Normal   NodeReady                8s (x2 over 8s)    kubelet          Node ha-044175-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.066481] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.165121] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.129651] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.275605] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +4.344469] systemd-fstab-generator[777]: Ignoring "noauto" option for root device
	[  +0.058179] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.730128] systemd-fstab-generator[959]: Ignoring "noauto" option for root device
	[  +0.903161] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.792303] systemd-fstab-generator[1383]: Ignoring "noauto" option for root device
	[  +0.087803] kauditd_printk_skb: 51 callbacks suppressed
	[ +13.188886] kauditd_printk_skb: 21 callbacks suppressed
	[Aug 5 23:11] kauditd_printk_skb: 35 callbacks suppressed
	[ +53.752834] kauditd_printk_skb: 24 callbacks suppressed
	[Aug 5 23:18] kauditd_printk_skb: 1 callbacks suppressed
	[Aug 5 23:21] systemd-fstab-generator[3834]: Ignoring "noauto" option for root device
	[  +0.151240] systemd-fstab-generator[3846]: Ignoring "noauto" option for root device
	[  +0.184946] systemd-fstab-generator[3860]: Ignoring "noauto" option for root device
	[  +0.146188] systemd-fstab-generator[3872]: Ignoring "noauto" option for root device
	[  +0.302569] systemd-fstab-generator[3900]: Ignoring "noauto" option for root device
	[  +9.901517] systemd-fstab-generator[4031]: Ignoring "noauto" option for root device
	[  +0.087787] kauditd_printk_skb: 110 callbacks suppressed
	[Aug 5 23:22] kauditd_printk_skb: 12 callbacks suppressed
	[ +12.414953] kauditd_printk_skb: 86 callbacks suppressed
	[ +10.059420] kauditd_printk_skb: 1 callbacks suppressed
	[ +25.020068] kauditd_printk_skb: 3 callbacks suppressed
	
	
	==> etcd [b0893967672c7dc591bbcf220e56601b8a46fc11f07e63adbadaddec59ec1803] <==
	2024/08/05 23:20:12 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/08/05 23:20:12 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/08/05 23:20:12 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/08/05 23:20:12 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-05T23:20:12.320974Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":17815555288227144781,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-05T23:20:12.360586Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.57:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-05T23:20:12.360628Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.57:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-05T23:20:12.360684Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"79ee2fa200dbf73d","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-05T23:20:12.360878Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"74b01d9147cbb35"}
	{"level":"info","ts":"2024-08-05T23:20:12.360916Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"74b01d9147cbb35"}
	{"level":"info","ts":"2024-08-05T23:20:12.360962Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"74b01d9147cbb35"}
	{"level":"info","ts":"2024-08-05T23:20:12.361042Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35"}
	{"level":"info","ts":"2024-08-05T23:20:12.361093Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35"}
	{"level":"info","ts":"2024-08-05T23:20:12.361128Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35"}
	{"level":"info","ts":"2024-08-05T23:20:12.361138Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"74b01d9147cbb35"}
	{"level":"info","ts":"2024-08-05T23:20:12.361143Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"64e36570e84f18f4"}
	{"level":"info","ts":"2024-08-05T23:20:12.361151Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"64e36570e84f18f4"}
	{"level":"info","ts":"2024-08-05T23:20:12.36117Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"64e36570e84f18f4"}
	{"level":"info","ts":"2024-08-05T23:20:12.36126Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"79ee2fa200dbf73d","remote-peer-id":"64e36570e84f18f4"}
	{"level":"info","ts":"2024-08-05T23:20:12.361303Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"79ee2fa200dbf73d","remote-peer-id":"64e36570e84f18f4"}
	{"level":"info","ts":"2024-08-05T23:20:12.361545Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"79ee2fa200dbf73d","remote-peer-id":"64e36570e84f18f4"}
	{"level":"info","ts":"2024-08-05T23:20:12.361588Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"64e36570e84f18f4"}
	{"level":"info","ts":"2024-08-05T23:20:12.365179Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.57:2380"}
	{"level":"info","ts":"2024-08-05T23:20:12.365321Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.57:2380"}
	{"level":"info","ts":"2024-08-05T23:20:12.365347Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-044175","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.57:2380"],"advertise-client-urls":["https://192.168.39.57:2379"]}
	
	
	==> etcd [da62836e55aaaf8eee39a34113a3d41ba6489986d26134bed80020f8c7164507] <==
	{"level":"warn","ts":"2024-08-05T23:24:13.352462Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"64e36570e84f18f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:24:13.452448Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"79ee2fa200dbf73d","from":"79ee2fa200dbf73d","remote-peer-id":"64e36570e84f18f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-05T23:24:13.499998Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.201:2380/version","remote-member-id":"64e36570e84f18f4","error":"Get \"https://192.168.39.201:2380/version\": dial tcp 192.168.39.201:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-05T23:24:13.500112Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"64e36570e84f18f4","error":"Get \"https://192.168.39.201:2380/version\": dial tcp 192.168.39.201:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-05T23:24:13.987796Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"64e36570e84f18f4","rtt":"0s","error":"dial tcp 192.168.39.201:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-08-05T23:24:13.987888Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"64e36570e84f18f4","rtt":"0s","error":"dial tcp 192.168.39.201:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-08-05T23:24:17.502884Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.201:2380/version","remote-member-id":"64e36570e84f18f4","error":"Get \"https://192.168.39.201:2380/version\": dial tcp 192.168.39.201:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-05T23:24:17.502953Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"64e36570e84f18f4","error":"Get \"https://192.168.39.201:2380/version\": dial tcp 192.168.39.201:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-05T23:24:18.988233Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"64e36570e84f18f4","rtt":"0s","error":"dial tcp 192.168.39.201:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-05T23:24:18.98832Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"64e36570e84f18f4","rtt":"0s","error":"dial tcp 192.168.39.201:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-05T23:24:21.504489Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.201:2380/version","remote-member-id":"64e36570e84f18f4","error":"Get \"https://192.168.39.201:2380/version\": dial tcp 192.168.39.201:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-05T23:24:21.504639Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"64e36570e84f18f4","error":"Get \"https://192.168.39.201:2380/version\": dial tcp 192.168.39.201:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-05T23:24:21.842927Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.930772ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-05T23:24:21.843166Z","caller":"traceutil/trace.go:171","msg":"trace[354114000] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2475; }","duration":"116.225132ms","start":"2024-08-05T23:24:21.726884Z","end":"2024-08-05T23:24:21.84311Z","steps":["trace[354114000] 'range keys from in-memory index tree'  (duration: 114.960987ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-05T23:24:23.988969Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"64e36570e84f18f4","rtt":"0s","error":"dial tcp 192.168.39.201:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-05T23:24:23.989107Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"64e36570e84f18f4","rtt":"0s","error":"dial tcp 192.168.39.201:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-05T23:24:25.506546Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.201:2380/version","remote-member-id":"64e36570e84f18f4","error":"Get \"https://192.168.39.201:2380/version\": dial tcp 192.168.39.201:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-05T23:24:25.506695Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"64e36570e84f18f4","error":"Get \"https://192.168.39.201:2380/version\": dial tcp 192.168.39.201:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-05T23:24:26.280998Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"64e36570e84f18f4"}
	{"level":"info","ts":"2024-08-05T23:24:26.281184Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"79ee2fa200dbf73d","remote-peer-id":"64e36570e84f18f4"}
	{"level":"info","ts":"2024-08-05T23:24:26.281361Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"79ee2fa200dbf73d","remote-peer-id":"64e36570e84f18f4"}
	{"level":"info","ts":"2024-08-05T23:24:26.30014Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"79ee2fa200dbf73d","to":"64e36570e84f18f4","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-05T23:24:26.300249Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"79ee2fa200dbf73d","remote-peer-id":"64e36570e84f18f4"}
	{"level":"info","ts":"2024-08-05T23:24:26.373123Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"79ee2fa200dbf73d","to":"64e36570e84f18f4","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-05T23:24:26.373191Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"79ee2fa200dbf73d","remote-peer-id":"64e36570e84f18f4"}
	
	
	==> kernel <==
	 23:25:17 up 15 min,  0 users,  load average: 0.63, 0.57, 0.34
	Linux ha-044175 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [97768d7c5371dd0c06071b82c8baadd28ee604281812facf0dbd4a723ea92274] <==
	I0805 23:24:44.215904       1 main.go:322] Node ha-044175-m04 has CIDR [10.244.3.0/24] 
	I0805 23:24:54.214146       1 main.go:295] Handling node with IPs: map[192.168.39.57:{}]
	I0805 23:24:54.214208       1 main.go:299] handling current node
	I0805 23:24:54.214231       1 main.go:295] Handling node with IPs: map[192.168.39.112:{}]
	I0805 23:24:54.214239       1 main.go:322] Node ha-044175-m02 has CIDR [10.244.1.0/24] 
	I0805 23:24:54.214616       1 main.go:295] Handling node with IPs: map[192.168.39.201:{}]
	I0805 23:24:54.214656       1 main.go:322] Node ha-044175-m03 has CIDR [10.244.2.0/24] 
	I0805 23:24:54.214745       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0805 23:24:54.214776       1 main.go:322] Node ha-044175-m04 has CIDR [10.244.3.0/24] 
	I0805 23:25:04.209944       1 main.go:295] Handling node with IPs: map[192.168.39.57:{}]
	I0805 23:25:04.210169       1 main.go:299] handling current node
	I0805 23:25:04.210244       1 main.go:295] Handling node with IPs: map[192.168.39.112:{}]
	I0805 23:25:04.210280       1 main.go:322] Node ha-044175-m02 has CIDR [10.244.1.0/24] 
	I0805 23:25:04.210612       1 main.go:295] Handling node with IPs: map[192.168.39.201:{}]
	I0805 23:25:04.210753       1 main.go:322] Node ha-044175-m03 has CIDR [10.244.2.0/24] 
	I0805 23:25:04.210899       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0805 23:25:04.210935       1 main.go:322] Node ha-044175-m04 has CIDR [10.244.3.0/24] 
	I0805 23:25:14.213423       1 main.go:295] Handling node with IPs: map[192.168.39.57:{}]
	I0805 23:25:14.213988       1 main.go:299] handling current node
	I0805 23:25:14.214101       1 main.go:295] Handling node with IPs: map[192.168.39.112:{}]
	I0805 23:25:14.214229       1 main.go:322] Node ha-044175-m02 has CIDR [10.244.1.0/24] 
	I0805 23:25:14.214653       1 main.go:295] Handling node with IPs: map[192.168.39.201:{}]
	I0805 23:25:14.214720       1 main.go:322] Node ha-044175-m03 has CIDR [10.244.2.0/24] 
	I0805 23:25:14.214821       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0805 23:25:14.214849       1 main.go:322] Node ha-044175-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [97fa319bea82614cab7525f9052bcc8a09fad765b260045dbf0d0fa0ca0290b2] <==
	I0805 23:19:32.766825       1 main.go:322] Node ha-044175-m02 has CIDR [10.244.1.0/24] 
	I0805 23:19:42.758028       1 main.go:295] Handling node with IPs: map[192.168.39.57:{}]
	I0805 23:19:42.758135       1 main.go:299] handling current node
	I0805 23:19:42.758162       1 main.go:295] Handling node with IPs: map[192.168.39.112:{}]
	I0805 23:19:42.758181       1 main.go:322] Node ha-044175-m02 has CIDR [10.244.1.0/24] 
	I0805 23:19:42.758324       1 main.go:295] Handling node with IPs: map[192.168.39.201:{}]
	I0805 23:19:42.758344       1 main.go:322] Node ha-044175-m03 has CIDR [10.244.2.0/24] 
	I0805 23:19:42.758482       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0805 23:19:42.758508       1 main.go:322] Node ha-044175-m04 has CIDR [10.244.3.0/24] 
	I0805 23:19:52.757433       1 main.go:295] Handling node with IPs: map[192.168.39.57:{}]
	I0805 23:19:52.757482       1 main.go:299] handling current node
	I0805 23:19:52.757504       1 main.go:295] Handling node with IPs: map[192.168.39.112:{}]
	I0805 23:19:52.757509       1 main.go:322] Node ha-044175-m02 has CIDR [10.244.1.0/24] 
	I0805 23:19:52.757630       1 main.go:295] Handling node with IPs: map[192.168.39.201:{}]
	I0805 23:19:52.757675       1 main.go:322] Node ha-044175-m03 has CIDR [10.244.2.0/24] 
	I0805 23:19:52.757731       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0805 23:19:52.757753       1 main.go:322] Node ha-044175-m04 has CIDR [10.244.3.0/24] 
	I0805 23:20:02.757577       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0805 23:20:02.757603       1 main.go:322] Node ha-044175-m04 has CIDR [10.244.3.0/24] 
	I0805 23:20:02.757761       1 main.go:295] Handling node with IPs: map[192.168.39.57:{}]
	I0805 23:20:02.757788       1 main.go:299] handling current node
	I0805 23:20:02.757800       1 main.go:295] Handling node with IPs: map[192.168.39.112:{}]
	I0805 23:20:02.757805       1 main.go:322] Node ha-044175-m02 has CIDR [10.244.1.0/24] 
	I0805 23:20:02.757858       1 main.go:295] Handling node with IPs: map[192.168.39.201:{}]
	I0805 23:20:02.757879       1 main.go:322] Node ha-044175-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [7cf9b7cb63859c9cfe968fc20b9dacecfc681905714bc14a19a78ba20314f787] <==
	I0805 23:22:44.577689       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0805 23:22:44.577987       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0805 23:22:44.663216       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0805 23:22:44.667234       1 aggregator.go:165] initial CRD sync complete...
	I0805 23:22:44.667341       1 autoregister_controller.go:141] Starting autoregister controller
	I0805 23:22:44.667411       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0805 23:22:44.693723       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0805 23:22:44.701661       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0805 23:22:44.705836       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0805 23:22:44.720285       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0805 23:22:44.730831       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0805 23:22:44.730866       1 policy_source.go:224] refreshing policies
	W0805 23:22:44.742162       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.112 192.168.39.201]
	I0805 23:22:44.744783       1 controller.go:615] quota admission added evaluator for: endpoints
	I0805 23:22:44.761335       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0805 23:22:44.761961       1 shared_informer.go:320] Caches are synced for configmaps
	I0805 23:22:44.762257       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0805 23:22:44.763051       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0805 23:22:44.769913       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0805 23:22:44.771243       1 cache.go:39] Caches are synced for autoregister controller
	E0805 23:22:44.780052       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0805 23:22:44.801491       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0805 23:22:45.576245       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0805 23:22:46.037358       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.112 192.168.39.201 192.168.39.57]
	W0805 23:22:56.035495       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.112 192.168.39.57]
	
	
	==> kube-apiserver [95d6da5b264d99c2ae66291b9df0943d6f8ac4b1743a5bef2caebaaa9fa1694c] <==
	I0805 23:22:03.308960       1 options.go:221] external host was not specified, using 192.168.39.57
	I0805 23:22:03.312215       1 server.go:148] Version: v1.30.3
	I0805 23:22:03.312262       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 23:22:04.226588       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0805 23:22:04.237351       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0805 23:22:04.237527       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0805 23:22:04.237764       1 instance.go:299] Using reconciler: lease
	I0805 23:22:04.239078       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0805 23:22:24.224714       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0805 23:22:24.225529       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0805 23:22:24.239123       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [6528b925e75da18994cd673a201712eb241eeff865202c130034f40f0a350bb8] <==
	I0805 23:23:01.886244       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0805 23:23:01.886276       1 shared_informer.go:320] Caches are synced for ephemeral
	I0805 23:23:02.031692       1 shared_informer.go:320] Caches are synced for disruption
	I0805 23:23:02.031827       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0805 23:23:02.034504       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0805 23:23:02.037652       1 shared_informer.go:320] Caches are synced for resource quota
	I0805 23:23:02.042119       1 shared_informer.go:320] Caches are synced for resource quota
	I0805 23:23:02.082162       1 shared_informer.go:320] Caches are synced for endpoint
	I0805 23:23:02.444182       1 shared_informer.go:320] Caches are synced for garbage collector
	I0805 23:23:02.444294       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0805 23:23:02.493712       1 shared_informer.go:320] Caches are synced for garbage collector
	I0805 23:23:13.641262       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-4qwjf EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-4qwjf\": the object has been modified; please apply your changes to the latest version and try again"
	I0805 23:23:13.641514       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"0247b6c4-5472-44c2-ad5e-6c5ed7ff58c9", APIVersion:"v1", ResourceVersion:"246", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-4qwjf EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-4qwjf": the object has been modified; please apply your changes to the latest version and try again
	I0805 23:23:13.676225       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="73.20641ms"
	I0805 23:23:13.676567       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="123.966µs"
	I0805 23:23:29.821195       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.901747ms"
	I0805 23:23:29.822100       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.178µs"
	I0805 23:23:43.639959       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-4qwjf EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-4qwjf\": the object has been modified; please apply your changes to the latest version and try again"
	I0805 23:23:43.640652       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"0247b6c4-5472-44c2-ad5e-6c5ed7ff58c9", APIVersion:"v1", ResourceVersion:"246", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-4qwjf EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-4qwjf": the object has been modified; please apply your changes to the latest version and try again
	I0805 23:23:43.675668       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="65.812844ms"
	I0805 23:23:43.675828       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="111.217µs"
	I0805 23:24:19.289761       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.788µs"
	I0805 23:24:38.619439       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.019085ms"
	I0805 23:24:38.619556       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.623µs"
	I0805 23:25:09.078335       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-044175-m04"
	
	
	==> kube-controller-manager [dd436770dad332628ad6a3b7fea663d52dda62901d07f6c1bfa5cf82ddae4f61] <==
	I0805 23:22:04.085143       1 serving.go:380] Generated self-signed cert in-memory
	I0805 23:22:04.507970       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0805 23:22:04.508020       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 23:22:04.512018       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0805 23:22:04.512893       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0805 23:22:04.513091       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0805 23:22:04.513203       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0805 23:22:25.246316       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.57:8443/healthz\": dial tcp 192.168.39.57:8443: connect: connection refused"
	
	
	==> kube-proxy [04c382fd4a32fe8685a6f643ecf7a291e4d542c2223975f9df92991fe566b12a] <==
	E0805 23:18:53.771110       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 23:18:53.770796       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 23:18:53.771183       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 23:18:53.770856       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-044175&resourceVersion=1814": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 23:18:53.771257       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-044175&resourceVersion=1814": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 23:19:01.963118       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 23:19:01.963775       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 23:19:01.963863       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 23:19:01.963914       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 23:19:01.963157       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-044175&resourceVersion=1814": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 23:19:01.964096       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-044175&resourceVersion=1814": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 23:19:09.708798       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 23:19:09.709004       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 23:19:12.781970       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-044175&resourceVersion=1814": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 23:19:12.782201       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-044175&resourceVersion=1814": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 23:19:15.851252       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 23:19:15.851494       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 23:19:28.140691       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 23:19:28.141005       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 23:19:34.283692       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-044175&resourceVersion=1814": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 23:19:34.283815       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-044175&resourceVersion=1814": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 23:19:40.427939       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 23:19:40.428007       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 23:20:05.003291       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 23:20:05.003527       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [5f43d5e7445c285e5783c937039be219df8aaea8c9db899259f8d24c895a378c] <==
	I0805 23:22:04.516359       1 server_linux.go:69] "Using iptables proxy"
	E0805 23:22:04.812454       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-044175\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0805 23:22:07.883973       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-044175\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0805 23:22:10.955807       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-044175\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0805 23:22:17.100004       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-044175\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0805 23:22:26.316453       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-044175\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0805 23:22:44.743833       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.57"]
	I0805 23:22:44.885801       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0805 23:22:44.885876       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 23:22:44.885897       1 server_linux.go:165] "Using iptables Proxier"
	I0805 23:22:44.889896       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 23:22:44.890153       1 server.go:872] "Version info" version="v1.30.3"
	I0805 23:22:44.890757       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 23:22:44.892946       1 config.go:192] "Starting service config controller"
	I0805 23:22:44.892989       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 23:22:44.893021       1 config.go:101] "Starting endpoint slice config controller"
	I0805 23:22:44.893025       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 23:22:44.893846       1 config.go:319] "Starting node config controller"
	I0805 23:22:44.893878       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 23:22:44.994170       1 shared_informer.go:320] Caches are synced for node config
	I0805 23:22:44.994235       1 shared_informer.go:320] Caches are synced for service config
	I0805 23:22:44.994299       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [2a85f2254a23cdec7e89ff8de2e31b06ddf2853808330965760217f1fd834004] <==
	E0805 23:20:06.326083       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0805 23:20:06.367923       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0805 23:20:06.368059       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0805 23:20:06.481469       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0805 23:20:06.481665       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0805 23:20:06.782289       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0805 23:20:06.782468       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0805 23:20:06.813116       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0805 23:20:06.813259       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0805 23:20:06.961524       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0805 23:20:06.961669       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0805 23:20:07.046991       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0805 23:20:07.047051       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0805 23:20:07.785781       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0805 23:20:07.785833       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0805 23:20:07.786992       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0805 23:20:07.787065       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0805 23:20:07.902160       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0805 23:20:07.902307       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0805 23:20:09.112724       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0805 23:20:09.112781       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0805 23:20:12.071715       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0805 23:20:12.071832       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0805 23:20:12.071997       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0805 23:20:12.072309       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [5537b3a8dbcb27d26dc336a48652fdd3385ec0fb3b5169e72e472a665bc2e3ed] <==
	W0805 23:22:35.439302       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.57:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.57:8443: connect: connection refused
	E0805 23:22:35.439361       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.57:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.57:8443: connect: connection refused
	W0805 23:22:40.379355       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.39.57:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.57:8443: connect: connection refused
	E0805 23:22:40.379571       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.57:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.57:8443: connect: connection refused
	W0805 23:22:41.426333       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.57:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.57:8443: connect: connection refused
	E0805 23:22:41.426454       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.57:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.57:8443: connect: connection refused
	W0805 23:22:41.704636       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.57:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.57:8443: connect: connection refused
	E0805 23:22:41.704759       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.57:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.57:8443: connect: connection refused
	W0805 23:22:41.934682       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.57:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.57:8443: connect: connection refused
	E0805 23:22:41.934794       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.57:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.57:8443: connect: connection refused
	W0805 23:22:42.102653       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.57:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.57:8443: connect: connection refused
	E0805 23:22:42.102712       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.57:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.57:8443: connect: connection refused
	W0805 23:22:42.201654       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.57:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.57:8443: connect: connection refused
	E0805 23:22:42.201741       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.57:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.57:8443: connect: connection refused
	W0805 23:22:42.264517       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.57:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.57:8443: connect: connection refused
	E0805 23:22:42.264621       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.57:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.57:8443: connect: connection refused
	W0805 23:22:42.570109       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.57:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.57:8443: connect: connection refused
	E0805 23:22:42.570180       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.57:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.57:8443: connect: connection refused
	W0805 23:22:44.707475       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0805 23:22:44.707652       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0805 23:22:44.707887       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0805 23:22:44.707974       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0805 23:22:44.708083       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0805 23:22:44.708179       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0805 23:22:45.761153       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 05 23:22:42 ha-044175 kubelet[1390]: I0805 23:22:42.711886    1390 scope.go:117] "RemoveContainer" containerID="95d6da5b264d99c2ae66291b9df0943d6f8ac4b1743a5bef2caebaaa9fa1694c"
	Aug 05 23:22:44 ha-044175 kubelet[1390]: I0805 23:22:44.699464    1390 scope.go:117] "RemoveContainer" containerID="bb38cdefb5246fc31da8b49e32a081eb2003b9c9de9c7c5941b6e563179848e7"
	Aug 05 23:22:44 ha-044175 kubelet[1390]: E0805 23:22:44.757021    1390 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:22:44 ha-044175 kubelet[1390]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:22:44 ha-044175 kubelet[1390]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:22:44 ha-044175 kubelet[1390]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:22:44 ha-044175 kubelet[1390]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:22:49 ha-044175 kubelet[1390]: I0805 23:22:49.711532    1390 scope.go:117] "RemoveContainer" containerID="dd436770dad332628ad6a3b7fea663d52dda62901d07f6c1bfa5cf82ddae4f61"
	Aug 05 23:22:52 ha-044175 kubelet[1390]: I0805 23:22:52.711969    1390 scope.go:117] "RemoveContainer" containerID="224c4967d5e92ceb088df11f70040bbd62d3bf073b04182cb32278b2db2419b1"
	Aug 05 23:22:52 ha-044175 kubelet[1390]: E0805 23:22:52.712831    1390 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(d30d1a5b-cfbe-4de6-a964-75c32e5dbf62)\"" pod="kube-system/storage-provisioner" podUID="d30d1a5b-cfbe-4de6-a964-75c32e5dbf62"
	Aug 05 23:23:07 ha-044175 kubelet[1390]: I0805 23:23:07.711763    1390 scope.go:117] "RemoveContainer" containerID="224c4967d5e92ceb088df11f70040bbd62d3bf073b04182cb32278b2db2419b1"
	Aug 05 23:23:08 ha-044175 kubelet[1390]: I0805 23:23:08.484987    1390 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-wmfql" podStartSLOduration=570.204277688 podStartE2EDuration="9m32.484956156s" podCreationTimestamp="2024-08-05 23:13:36 +0000 UTC" firstStartedPulling="2024-08-05 23:13:36.63016347 +0000 UTC m=+172.132535916" lastFinishedPulling="2024-08-05 23:13:38.910841959 +0000 UTC m=+174.413214384" observedRunningTime="2024-08-05 23:13:39.482313741 +0000 UTC m=+174.984686186" watchObservedRunningTime="2024-08-05 23:23:08.484956156 +0000 UTC m=+743.987328599"
	Aug 05 23:23:34 ha-044175 kubelet[1390]: I0805 23:23:34.711459    1390 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-044175" podUID="505ff885-b8a0-48bd-8d1e-81e4583b48af"
	Aug 05 23:23:34 ha-044175 kubelet[1390]: I0805 23:23:34.733131    1390 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-044175"
	Aug 05 23:23:43 ha-044175 kubelet[1390]: I0805 23:23:43.667600    1390 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-044175" podStartSLOduration=9.667493547 podStartE2EDuration="9.667493547s" podCreationTimestamp="2024-08-05 23:23:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 23:23:43.667209348 +0000 UTC m=+779.169581793" watchObservedRunningTime="2024-08-05 23:23:43.667493547 +0000 UTC m=+779.169865993"
	Aug 05 23:23:44 ha-044175 kubelet[1390]: E0805 23:23:44.737044    1390 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:23:44 ha-044175 kubelet[1390]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:23:44 ha-044175 kubelet[1390]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:23:44 ha-044175 kubelet[1390]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:23:44 ha-044175 kubelet[1390]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:24:44 ha-044175 kubelet[1390]: E0805 23:24:44.739120    1390 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:24:44 ha-044175 kubelet[1390]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:24:44 ha-044175 kubelet[1390]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:24:44 ha-044175 kubelet[1390]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:24:44 ha-044175 kubelet[1390]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0805 23:25:16.056007   37579 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19373-9606/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-044175 -n ha-044175
helpers_test.go:261: (dbg) Run:  kubectl --context ha-044175 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (429.18s)
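
Note on the stderr above: the "bufio.Scanner: token too long" error is standard Go bufio.Scanner behavior — by default a single line may not exceed bufio.MaxScanTokenSize (64 KiB), so a longer line in lastStart.txt aborts the scan. The sketch below is illustrative only (it is not minikube's actual logs code, and the "lastStart.txt" path is a placeholder); it shows how a larger explicit buffer avoids that error when reading such a file line by line.

package main

import (
	"bufio"
	"fmt"
	"os"
)

// readLines reads a file line by line, allowing lines up to maxLine bytes.
// Without the explicit Buffer call, any line longer than 64 KiB makes
// Scanner.Err() return bufio.ErrTooLong ("bufio.Scanner: token too long"),
// which is the failure reported in the stderr above.
func readLines(path string, maxLine int) ([]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Raise the default 64 KiB per-line limit; must be set before the first Scan.
	sc.Buffer(make([]byte, 0, 64*1024), maxLine)

	var lines []string
	for sc.Scan() {
		lines = append(lines, sc.Text())
	}
	return lines, sc.Err()
}

func main() {
	// Placeholder path for illustration; the report's real path is shown above.
	lines, err := readLines("lastStart.txt", 10*1024*1024)
	if err != nil {
		fmt.Fprintln(os.Stderr, "read failed:", err)
		os.Exit(1)
	}
	fmt.Println("read", len(lines), "lines")
}
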

                                                
                                    
TestMultiControlPlane/serial/StopCluster (141.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 stop -v=7 --alsologtostderr
E0805 23:26:49.980860   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/functional-299463/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-044175 stop -v=7 --alsologtostderr: exit status 82 (2m0.467678015s)

                                                
                                                
-- stdout --
	* Stopping node "ha-044175-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 23:25:36.207805   37991 out.go:291] Setting OutFile to fd 1 ...
	I0805 23:25:36.208043   37991 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:25:36.208054   37991 out.go:304] Setting ErrFile to fd 2...
	I0805 23:25:36.208058   37991 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:25:36.208543   37991 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-9606/.minikube/bin
	I0805 23:25:36.209071   37991 out.go:298] Setting JSON to false
	I0805 23:25:36.209239   37991 mustload.go:65] Loading cluster: ha-044175
	I0805 23:25:36.209595   37991 config.go:182] Loaded profile config "ha-044175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:25:36.209675   37991 profile.go:143] Saving config to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/config.json ...
	I0805 23:25:36.209862   37991 mustload.go:65] Loading cluster: ha-044175
	I0805 23:25:36.209983   37991 config.go:182] Loaded profile config "ha-044175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:25:36.210015   37991 stop.go:39] StopHost: ha-044175-m04
	I0805 23:25:36.210360   37991 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:25:36.210407   37991 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:25:36.225557   37991 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35149
	I0805 23:25:36.225980   37991 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:25:36.226501   37991 main.go:141] libmachine: Using API Version  1
	I0805 23:25:36.226520   37991 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:25:36.226896   37991 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:25:36.229152   37991 out.go:177] * Stopping node "ha-044175-m04"  ...
	I0805 23:25:36.230409   37991 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0805 23:25:36.230432   37991 main.go:141] libmachine: (ha-044175-m04) Calling .DriverName
	I0805 23:25:36.230657   37991 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0805 23:25:36.230678   37991 main.go:141] libmachine: (ha-044175-m04) Calling .GetSSHHostname
	I0805 23:25:36.233997   37991 main.go:141] libmachine: (ha-044175-m04) DBG | domain ha-044175-m04 has defined MAC address 52:54:00:e5:ba:4d in network mk-ha-044175
	I0805 23:25:36.234542   37991 main.go:141] libmachine: (ha-044175-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:ba:4d", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:25:03 +0000 UTC Type:0 Mac:52:54:00:e5:ba:4d Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-044175-m04 Clientid:01:52:54:00:e5:ba:4d}
	I0805 23:25:36.234577   37991 main.go:141] libmachine: (ha-044175-m04) DBG | domain ha-044175-m04 has defined IP address 192.168.39.228 and MAC address 52:54:00:e5:ba:4d in network mk-ha-044175
	I0805 23:25:36.234786   37991 main.go:141] libmachine: (ha-044175-m04) Calling .GetSSHPort
	I0805 23:25:36.234936   37991 main.go:141] libmachine: (ha-044175-m04) Calling .GetSSHKeyPath
	I0805 23:25:36.235093   37991 main.go:141] libmachine: (ha-044175-m04) Calling .GetSSHUsername
	I0805 23:25:36.235193   37991 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m04/id_rsa Username:docker}
	I0805 23:25:36.319610   37991 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0805 23:25:36.373348   37991 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0805 23:25:36.427988   37991 main.go:141] libmachine: Stopping "ha-044175-m04"...
	I0805 23:25:36.428013   37991 main.go:141] libmachine: (ha-044175-m04) Calling .GetState
	I0805 23:25:36.429439   37991 main.go:141] libmachine: (ha-044175-m04) Calling .Stop
	I0805 23:25:36.432707   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 0/120
	I0805 23:25:37.434134   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 1/120
	I0805 23:25:38.435729   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 2/120
	I0805 23:25:39.437561   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 3/120
	I0805 23:25:40.439147   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 4/120
	I0805 23:25:41.440797   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 5/120
	I0805 23:25:42.442518   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 6/120
	I0805 23:25:43.443923   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 7/120
	I0805 23:25:44.445301   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 8/120
	I0805 23:25:45.447693   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 9/120
	I0805 23:25:46.449937   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 10/120
	I0805 23:25:47.451238   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 11/120
	I0805 23:25:48.452572   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 12/120
	I0805 23:25:49.453894   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 13/120
	I0805 23:25:50.455396   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 14/120
	I0805 23:25:51.457305   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 15/120
	I0805 23:25:52.458463   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 16/120
	I0805 23:25:53.459988   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 17/120
	I0805 23:25:54.462293   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 18/120
	I0805 23:25:55.463555   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 19/120
	I0805 23:25:56.465466   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 20/120
	I0805 23:25:57.467144   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 21/120
	I0805 23:25:58.468288   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 22/120
	I0805 23:25:59.469808   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 23/120
	I0805 23:26:00.471758   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 24/120
	I0805 23:26:01.473561   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 25/120
	I0805 23:26:02.475042   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 26/120
	I0805 23:26:03.476461   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 27/120
	I0805 23:26:04.477863   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 28/120
	I0805 23:26:05.479008   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 29/120
	I0805 23:26:06.481122   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 30/120
	I0805 23:26:07.482426   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 31/120
	I0805 23:26:08.484282   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 32/120
	I0805 23:26:09.486378   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 33/120
	I0805 23:26:10.488318   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 34/120
	I0805 23:26:11.490009   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 35/120
	I0805 23:26:12.491407   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 36/120
	I0805 23:26:13.492621   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 37/120
	I0805 23:26:14.494034   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 38/120
	I0805 23:26:15.495419   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 39/120
	I0805 23:26:16.497492   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 40/120
	I0805 23:26:17.498913   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 41/120
	I0805 23:26:18.500590   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 42/120
	I0805 23:26:19.501896   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 43/120
	I0805 23:26:20.503383   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 44/120
	I0805 23:26:21.505131   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 45/120
	I0805 23:26:22.506529   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 46/120
	I0805 23:26:23.508139   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 47/120
	I0805 23:26:24.509769   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 48/120
	I0805 23:26:25.511454   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 49/120
	I0805 23:26:26.513756   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 50/120
	I0805 23:26:27.515208   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 51/120
	I0805 23:26:28.516542   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 52/120
	I0805 23:26:29.517920   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 53/120
	I0805 23:26:30.519498   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 54/120
	I0805 23:26:31.521796   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 55/120
	I0805 23:26:32.523184   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 56/120
	I0805 23:26:33.524528   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 57/120
	I0805 23:26:34.526315   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 58/120
	I0805 23:26:35.527641   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 59/120
	I0805 23:26:36.529493   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 60/120
	I0805 23:26:37.530679   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 61/120
	I0805 23:26:38.531973   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 62/120
	I0805 23:26:39.533336   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 63/120
	I0805 23:26:40.534830   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 64/120
	I0805 23:26:41.536327   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 65/120
	I0805 23:26:42.538064   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 66/120
	I0805 23:26:43.539770   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 67/120
	I0805 23:26:44.541149   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 68/120
	I0805 23:26:45.542641   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 69/120
	I0805 23:26:46.544738   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 70/120
	I0805 23:26:47.546332   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 71/120
	I0805 23:26:48.547781   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 72/120
	I0805 23:26:49.549191   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 73/120
	I0805 23:26:50.551082   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 74/120
	I0805 23:26:51.552735   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 75/120
	I0805 23:26:52.554094   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 76/120
	I0805 23:26:53.555470   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 77/120
	I0805 23:26:54.556791   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 78/120
	I0805 23:26:55.558346   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 79/120
	I0805 23:26:56.560341   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 80/120
	I0805 23:26:57.561897   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 81/120
	I0805 23:26:58.563189   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 82/120
	I0805 23:26:59.564524   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 83/120
	I0805 23:27:00.565907   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 84/120
	I0805 23:27:01.568170   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 85/120
	I0805 23:27:02.569468   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 86/120
	I0805 23:27:03.570946   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 87/120
	I0805 23:27:04.572403   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 88/120
	I0805 23:27:05.573797   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 89/120
	I0805 23:27:06.575350   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 90/120
	I0805 23:27:07.577557   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 91/120
	I0805 23:27:08.579143   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 92/120
	I0805 23:27:09.580531   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 93/120
	I0805 23:27:10.582498   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 94/120
	I0805 23:27:11.584432   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 95/120
	I0805 23:27:12.586040   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 96/120
	I0805 23:27:13.588088   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 97/120
	I0805 23:27:14.589677   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 98/120
	I0805 23:27:15.591037   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 99/120
	I0805 23:27:16.593173   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 100/120
	I0805 23:27:17.594964   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 101/120
	I0805 23:27:18.596375   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 102/120
	I0805 23:27:19.598108   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 103/120
	I0805 23:27:20.600404   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 104/120
	I0805 23:27:21.602164   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 105/120
	I0805 23:27:22.603553   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 106/120
	I0805 23:27:23.605691   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 107/120
	I0805 23:27:24.607014   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 108/120
	I0805 23:27:25.608786   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 109/120
	I0805 23:27:26.610803   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 110/120
	I0805 23:27:27.612137   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 111/120
	I0805 23:27:28.613301   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 112/120
	I0805 23:27:29.614664   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 113/120
	I0805 23:27:30.616112   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 114/120
	I0805 23:27:31.617634   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 115/120
	I0805 23:27:32.619891   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 116/120
	I0805 23:27:33.621199   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 117/120
	I0805 23:27:34.622427   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 118/120
	I0805 23:27:35.623613   37991 main.go:141] libmachine: (ha-044175-m04) Waiting for machine to stop 119/120
	I0805 23:27:36.624181   37991 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0805 23:27:36.624238   37991 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0805 23:27:36.626219   37991 out.go:177] 
	W0805 23:27:36.627701   37991 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0805 23:27:36.627720   37991 out.go:239] * 
	W0805 23:27:36.630086   37991 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 23:27:36.631590   37991 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-044175 stop -v=7 --alsologtostderr": exit status 82
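
The stop failure above is a timeout: the driver polls the VM state once per second and gives up after 120 attempts ("Waiting for machine to stop N/120"), at which point minikube reports GUEST_STOP_TIMEOUT and the stop command exits with status 82. The sketch below only illustrates that bounded-poll pattern and is not the actual libmachine/minikube code; waitForStop, its parameters, and the simulated getState callback are hypothetical names.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitForStop polls getState up to maxAttempts times, sleeping interval between
	// polls, and returns an error if the machine never leaves the "Running" state.
	func waitForStop(getState func() string, maxAttempts int, interval time.Duration) error {
		for i := 0; i < maxAttempts; i++ {
			if getState() != "Running" {
				return nil // machine stopped (or otherwise left Running)
			}
			time.Sleep(interval)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		// Simulate a VM that never stops, as in the run above (120 x ~1s in the real
		// log; shortened here so the example finishes quickly).
		err := waitForStop(func() string { return "Running" }, 5, 100*time.Millisecond)
		fmt.Println("stop err:", err)
	}
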
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-044175 status -v=7 --alsologtostderr: exit status 3 (18.830031742s)

                                                
                                                
-- stdout --
	ha-044175
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-044175-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-044175-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 23:27:36.674793   38442 out.go:291] Setting OutFile to fd 1 ...
	I0805 23:27:36.675025   38442 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:27:36.675033   38442 out.go:304] Setting ErrFile to fd 2...
	I0805 23:27:36.675037   38442 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:27:36.675274   38442 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-9606/.minikube/bin
	I0805 23:27:36.675442   38442 out.go:298] Setting JSON to false
	I0805 23:27:36.675467   38442 mustload.go:65] Loading cluster: ha-044175
	I0805 23:27:36.675591   38442 notify.go:220] Checking for updates...
	I0805 23:27:36.675918   38442 config.go:182] Loaded profile config "ha-044175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:27:36.675935   38442 status.go:255] checking status of ha-044175 ...
	I0805 23:27:36.676390   38442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:27:36.676466   38442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:27:36.695302   38442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33807
	I0805 23:27:36.695709   38442 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:27:36.696310   38442 main.go:141] libmachine: Using API Version  1
	I0805 23:27:36.696330   38442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:27:36.696637   38442 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:27:36.696819   38442 main.go:141] libmachine: (ha-044175) Calling .GetState
	I0805 23:27:36.698542   38442 status.go:330] ha-044175 host status = "Running" (err=<nil>)
	I0805 23:27:36.698571   38442 host.go:66] Checking if "ha-044175" exists ...
	I0805 23:27:36.698903   38442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:27:36.698935   38442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:27:36.713100   38442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40739
	I0805 23:27:36.713548   38442 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:27:36.714019   38442 main.go:141] libmachine: Using API Version  1
	I0805 23:27:36.714041   38442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:27:36.714392   38442 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:27:36.714567   38442 main.go:141] libmachine: (ha-044175) Calling .GetIP
	I0805 23:27:36.717313   38442 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:27:36.717783   38442 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:27:36.717818   38442 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:27:36.717984   38442 host.go:66] Checking if "ha-044175" exists ...
	I0805 23:27:36.718365   38442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:27:36.718447   38442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:27:36.732716   38442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40355
	I0805 23:27:36.733164   38442 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:27:36.733692   38442 main.go:141] libmachine: Using API Version  1
	I0805 23:27:36.733713   38442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:27:36.734014   38442 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:27:36.734181   38442 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:27:36.734374   38442 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 23:27:36.734396   38442 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:27:36.736858   38442 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:27:36.737209   38442 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:27:36.737235   38442 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:27:36.737354   38442 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:27:36.737515   38442 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:27:36.737660   38442 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:27:36.737801   38442 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/id_rsa Username:docker}
	I0805 23:27:36.822023   38442 ssh_runner.go:195] Run: systemctl --version
	I0805 23:27:36.829161   38442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 23:27:36.848265   38442 kubeconfig.go:125] found "ha-044175" server: "https://192.168.39.254:8443"
	I0805 23:27:36.848290   38442 api_server.go:166] Checking apiserver status ...
	I0805 23:27:36.848334   38442 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 23:27:36.864742   38442 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5161/cgroup
	W0805 23:27:36.875700   38442 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5161/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 23:27:36.875747   38442 ssh_runner.go:195] Run: ls
	I0805 23:27:36.881735   38442 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0805 23:27:36.888768   38442 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0805 23:27:36.888794   38442 status.go:422] ha-044175 apiserver status = Running (err=<nil>)
	I0805 23:27:36.888806   38442 status.go:257] ha-044175 status: &{Name:ha-044175 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 23:27:36.888840   38442 status.go:255] checking status of ha-044175-m02 ...
	I0805 23:27:36.889244   38442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:27:36.889287   38442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:27:36.903868   38442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39737
	I0805 23:27:36.904362   38442 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:27:36.904909   38442 main.go:141] libmachine: Using API Version  1
	I0805 23:27:36.904936   38442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:27:36.905242   38442 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:27:36.905443   38442 main.go:141] libmachine: (ha-044175-m02) Calling .GetState
	I0805 23:27:36.907014   38442 status.go:330] ha-044175-m02 host status = "Running" (err=<nil>)
	I0805 23:27:36.907030   38442 host.go:66] Checking if "ha-044175-m02" exists ...
	I0805 23:27:36.907349   38442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:27:36.907410   38442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:27:36.921688   38442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38925
	I0805 23:27:36.922033   38442 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:27:36.922493   38442 main.go:141] libmachine: Using API Version  1
	I0805 23:27:36.922521   38442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:27:36.922934   38442 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:27:36.923122   38442 main.go:141] libmachine: (ha-044175-m02) Calling .GetIP
	I0805 23:27:36.925790   38442 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:27:36.926173   38442 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:22:06 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:27:36.926197   38442 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:27:36.926365   38442 host.go:66] Checking if "ha-044175-m02" exists ...
	I0805 23:27:36.926762   38442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:27:36.926803   38442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:27:36.941890   38442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45739
	I0805 23:27:36.942408   38442 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:27:36.942858   38442 main.go:141] libmachine: Using API Version  1
	I0805 23:27:36.942883   38442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:27:36.943218   38442 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:27:36.943366   38442 main.go:141] libmachine: (ha-044175-m02) Calling .DriverName
	I0805 23:27:36.943538   38442 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 23:27:36.943559   38442 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHHostname
	I0805 23:27:36.946202   38442 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:27:36.946579   38442 main.go:141] libmachine: (ha-044175-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:84:bb:47", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:22:06 +0000 UTC Type:0 Mac:52:54:00:84:bb:47 Iaid: IPaddr:192.168.39.112 Prefix:24 Hostname:ha-044175-m02 Clientid:01:52:54:00:84:bb:47}
	I0805 23:27:36.946620   38442 main.go:141] libmachine: (ha-044175-m02) DBG | domain ha-044175-m02 has defined IP address 192.168.39.112 and MAC address 52:54:00:84:bb:47 in network mk-ha-044175
	I0805 23:27:36.946721   38442 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHPort
	I0805 23:27:36.946875   38442 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHKeyPath
	I0805 23:27:36.947017   38442 main.go:141] libmachine: (ha-044175-m02) Calling .GetSSHUsername
	I0805 23:27:36.947173   38442 sshutil.go:53] new ssh client: &{IP:192.168.39.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m02/id_rsa Username:docker}
	I0805 23:27:37.031945   38442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 23:27:37.051972   38442 kubeconfig.go:125] found "ha-044175" server: "https://192.168.39.254:8443"
	I0805 23:27:37.052007   38442 api_server.go:166] Checking apiserver status ...
	I0805 23:27:37.052047   38442 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 23:27:37.066845   38442 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1372/cgroup
	W0805 23:27:37.078082   38442 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1372/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 23:27:37.078159   38442 ssh_runner.go:195] Run: ls
	I0805 23:27:37.084321   38442 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0805 23:27:37.088900   38442 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0805 23:27:37.088925   38442 status.go:422] ha-044175-m02 apiserver status = Running (err=<nil>)
	I0805 23:27:37.088935   38442 status.go:257] ha-044175-m02 status: &{Name:ha-044175-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 23:27:37.088947   38442 status.go:255] checking status of ha-044175-m04 ...
	I0805 23:27:37.089338   38442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:27:37.089385   38442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:27:37.103962   38442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35147
	I0805 23:27:37.104377   38442 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:27:37.104837   38442 main.go:141] libmachine: Using API Version  1
	I0805 23:27:37.104857   38442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:27:37.105156   38442 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:27:37.105394   38442 main.go:141] libmachine: (ha-044175-m04) Calling .GetState
	I0805 23:27:37.106803   38442 status.go:330] ha-044175-m04 host status = "Running" (err=<nil>)
	I0805 23:27:37.106818   38442 host.go:66] Checking if "ha-044175-m04" exists ...
	I0805 23:27:37.107167   38442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:27:37.107225   38442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:27:37.121132   38442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34967
	I0805 23:27:37.121523   38442 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:27:37.121984   38442 main.go:141] libmachine: Using API Version  1
	I0805 23:27:37.122002   38442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:27:37.122294   38442 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:27:37.122453   38442 main.go:141] libmachine: (ha-044175-m04) Calling .GetIP
	I0805 23:27:37.124934   38442 main.go:141] libmachine: (ha-044175-m04) DBG | domain ha-044175-m04 has defined MAC address 52:54:00:e5:ba:4d in network mk-ha-044175
	I0805 23:27:37.125310   38442 main.go:141] libmachine: (ha-044175-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:ba:4d", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:25:03 +0000 UTC Type:0 Mac:52:54:00:e5:ba:4d Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-044175-m04 Clientid:01:52:54:00:e5:ba:4d}
	I0805 23:27:37.125341   38442 main.go:141] libmachine: (ha-044175-m04) DBG | domain ha-044175-m04 has defined IP address 192.168.39.228 and MAC address 52:54:00:e5:ba:4d in network mk-ha-044175
	I0805 23:27:37.125469   38442 host.go:66] Checking if "ha-044175-m04" exists ...
	I0805 23:27:37.125750   38442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:27:37.125783   38442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:27:37.140373   38442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42759
	I0805 23:27:37.140759   38442 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:27:37.141181   38442 main.go:141] libmachine: Using API Version  1
	I0805 23:27:37.141204   38442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:27:37.141546   38442 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:27:37.141738   38442 main.go:141] libmachine: (ha-044175-m04) Calling .DriverName
	I0805 23:27:37.141896   38442 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 23:27:37.141918   38442 main.go:141] libmachine: (ha-044175-m04) Calling .GetSSHHostname
	I0805 23:27:37.144612   38442 main.go:141] libmachine: (ha-044175-m04) DBG | domain ha-044175-m04 has defined MAC address 52:54:00:e5:ba:4d in network mk-ha-044175
	I0805 23:27:37.144954   38442 main.go:141] libmachine: (ha-044175-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e5:ba:4d", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:25:03 +0000 UTC Type:0 Mac:52:54:00:e5:ba:4d Iaid: IPaddr:192.168.39.228 Prefix:24 Hostname:ha-044175-m04 Clientid:01:52:54:00:e5:ba:4d}
	I0805 23:27:37.144986   38442 main.go:141] libmachine: (ha-044175-m04) DBG | domain ha-044175-m04 has defined IP address 192.168.39.228 and MAC address 52:54:00:e5:ba:4d in network mk-ha-044175
	I0805 23:27:37.145108   38442 main.go:141] libmachine: (ha-044175-m04) Calling .GetSSHPort
	I0805 23:27:37.145268   38442 main.go:141] libmachine: (ha-044175-m04) Calling .GetSSHKeyPath
	I0805 23:27:37.145426   38442 main.go:141] libmachine: (ha-044175-m04) Calling .GetSSHUsername
	I0805 23:27:37.145554   38442 sshutil.go:53] new ssh client: &{IP:192.168.39.228 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175-m04/id_rsa Username:docker}
	W0805 23:27:55.463237   38442 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.228:22: connect: no route to host
	W0805 23:27:55.463338   38442 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.228:22: connect: no route to host
	E0805 23:27:55.463357   38442 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.228:22: connect: no route to host
	I0805 23:27:55.463370   38442 status.go:257] ha-044175-m04 status: &{Name:ha-044175-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0805 23:27:55.463394   38442 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.228:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-044175 status -v=7 --alsologtostderr" : exit status 3
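
The status failure is downstream of the same problem: ha-044175-m04 never stopped cleanly and is now unreachable over the network, so the SSH dial to 192.168.39.228:22 fails with "no route to host" and the node is reported as Host:Error / kubelet:Nonexistent. One quick way to confirm that symptom is a plain TCP probe of the SSH port; the snippet below is an illustrative check, not part of minikube, and the address is taken from the log above.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// SSH endpoint of ha-044175-m04 as reported in the status log above.
		addr := "192.168.39.228:22"
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			// Expected in this run: "dial tcp 192.168.39.228:22: connect: no route to host"
			fmt.Printf("ssh port unreachable: %v\n", err)
			return
		}
		defer conn.Close()
		fmt.Println("ssh port reachable")
	}
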
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-044175 -n ha-044175
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-044175 logs -n 25: (1.708176188s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-044175 ssh -n ha-044175-m02 sudo cat                                          | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | /home/docker/cp-test_ha-044175-m03_ha-044175-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-044175 cp ha-044175-m03:/home/docker/cp-test.txt                              | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m04:/home/docker/cp-test_ha-044175-m03_ha-044175-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n                                                                 | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n ha-044175-m04 sudo cat                                          | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | /home/docker/cp-test_ha-044175-m03_ha-044175-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-044175 cp testdata/cp-test.txt                                                | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n                                                                 | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-044175 cp ha-044175-m04:/home/docker/cp-test.txt                              | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3481107746/001/cp-test_ha-044175-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n                                                                 | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-044175 cp ha-044175-m04:/home/docker/cp-test.txt                              | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175:/home/docker/cp-test_ha-044175-m04_ha-044175.txt                       |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n                                                                 | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n ha-044175 sudo cat                                              | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | /home/docker/cp-test_ha-044175-m04_ha-044175.txt                                 |           |         |         |                     |                     |
	| cp      | ha-044175 cp ha-044175-m04:/home/docker/cp-test.txt                              | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m02:/home/docker/cp-test_ha-044175-m04_ha-044175-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n                                                                 | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n ha-044175-m02 sudo cat                                          | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | /home/docker/cp-test_ha-044175-m04_ha-044175-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-044175 cp ha-044175-m04:/home/docker/cp-test.txt                              | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m03:/home/docker/cp-test_ha-044175-m04_ha-044175-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n                                                                 | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | ha-044175-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-044175 ssh -n ha-044175-m03 sudo cat                                          | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC | 05 Aug 24 23:14 UTC |
	|         | /home/docker/cp-test_ha-044175-m04_ha-044175-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-044175 node stop m02 -v=7                                                     | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:14 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-044175 node start m02 -v=7                                                    | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:17 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-044175 -v=7                                                           | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-044175 -v=7                                                                | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-044175 --wait=true -v=7                                                    | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:20 UTC | 05 Aug 24 23:25 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-044175                                                                | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:25 UTC |                     |
	| node    | ha-044175 node delete m03 -v=7                                                   | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:25 UTC | 05 Aug 24 23:25 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-044175 stop -v=7                                                              | ha-044175 | jenkins | v1.33.1 | 05 Aug 24 23:25 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 23:20:11
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 23:20:11.244663   36023 out.go:291] Setting OutFile to fd 1 ...
	I0805 23:20:11.244760   36023 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:20:11.244768   36023 out.go:304] Setting ErrFile to fd 2...
	I0805 23:20:11.244772   36023 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:20:11.244977   36023 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-9606/.minikube/bin
	I0805 23:20:11.245505   36023 out.go:298] Setting JSON to false
	I0805 23:20:11.246396   36023 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3757,"bootTime":1722896254,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0805 23:20:11.246452   36023 start.go:139] virtualization: kvm guest
	I0805 23:20:11.248666   36023 out.go:177] * [ha-044175] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0805 23:20:11.250176   36023 notify.go:220] Checking for updates...
	I0805 23:20:11.250186   36023 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 23:20:11.251701   36023 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 23:20:11.253248   36023 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19373-9606/kubeconfig
	I0805 23:20:11.254509   36023 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19373-9606/.minikube
	I0805 23:20:11.255648   36023 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0805 23:20:11.256694   36023 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 23:20:11.258173   36023 config.go:182] Loaded profile config "ha-044175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:20:11.258262   36023 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 23:20:11.258795   36023 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:20:11.258871   36023 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:20:11.275140   36023 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46293
	I0805 23:20:11.275509   36023 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:20:11.276277   36023 main.go:141] libmachine: Using API Version  1
	I0805 23:20:11.276304   36023 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:20:11.276594   36023 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:20:11.276754   36023 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:20:11.312308   36023 out.go:177] * Using the kvm2 driver based on existing profile
	I0805 23:20:11.313540   36023 start.go:297] selected driver: kvm2
	I0805 23:20:11.313559   36023 start.go:901] validating driver "kvm2" against &{Name:ha-044175 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.3 ClusterName:ha-044175 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.112 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.201 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.228 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 23:20:11.313716   36023 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 23:20:11.314047   36023 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 23:20:11.314117   36023 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19373-9606/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0805 23:20:11.328722   36023 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0805 23:20:11.329453   36023 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 23:20:11.329481   36023 cni.go:84] Creating CNI manager for ""
	I0805 23:20:11.329488   36023 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0805 23:20:11.329551   36023 start.go:340] cluster config:
	{Name:ha-044175 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-044175 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.112 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.201 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.228 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 23:20:11.329722   36023 iso.go:125] acquiring lock: {Name:mk54a637ed625e04bb2b6adf973b61c976cd6d35 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 23:20:11.331514   36023 out.go:177] * Starting "ha-044175" primary control-plane node in "ha-044175" cluster
	I0805 23:20:11.332803   36023 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 23:20:11.332846   36023 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0805 23:20:11.332858   36023 cache.go:56] Caching tarball of preloaded images
	I0805 23:20:11.332935   36023 preload.go:172] Found /home/jenkins/minikube-integration/19373-9606/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0805 23:20:11.332949   36023 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0805 23:20:11.333069   36023 profile.go:143] Saving config to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/config.json ...
	I0805 23:20:11.333250   36023 start.go:360] acquireMachinesLock for ha-044175: {Name:mkd2ba511c39504598222edbf83078b718329186 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 23:20:11.333299   36023 start.go:364] duration metric: took 31.481µs to acquireMachinesLock for "ha-044175"
	I0805 23:20:11.333318   36023 start.go:96] Skipping create...Using existing machine configuration
	I0805 23:20:11.333327   36023 fix.go:54] fixHost starting: 
	I0805 23:20:11.333571   36023 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:20:11.333607   36023 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:20:11.347610   36023 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32929
	I0805 23:20:11.348065   36023 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:20:11.348630   36023 main.go:141] libmachine: Using API Version  1
	I0805 23:20:11.348668   36023 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:20:11.348959   36023 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:20:11.349162   36023 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:20:11.349310   36023 main.go:141] libmachine: (ha-044175) Calling .GetState
	I0805 23:20:11.351111   36023 fix.go:112] recreateIfNeeded on ha-044175: state=Running err=<nil>
	W0805 23:20:11.351133   36023 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 23:20:11.353615   36023 out.go:177] * Updating the running kvm2 "ha-044175" VM ...
	I0805 23:20:11.355215   36023 machine.go:94] provisionDockerMachine start ...
	I0805 23:20:11.355243   36023 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:20:11.355532   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:20:11.358123   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:20:11.358605   36023 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:20:11.358644   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:20:11.358761   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:20:11.358938   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:20:11.359099   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:20:11.359250   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:20:11.359406   36023 main.go:141] libmachine: Using SSH client type: native
	I0805 23:20:11.359582   36023 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0805 23:20:11.359592   36023 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 23:20:11.464563   36023 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-044175
	
	I0805 23:20:11.464593   36023 main.go:141] libmachine: (ha-044175) Calling .GetMachineName
	I0805 23:20:11.464871   36023 buildroot.go:166] provisioning hostname "ha-044175"
	I0805 23:20:11.464902   36023 main.go:141] libmachine: (ha-044175) Calling .GetMachineName
	I0805 23:20:11.465139   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:20:11.467742   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:20:11.468117   36023 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:20:11.468141   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:20:11.468296   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:20:11.468477   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:20:11.468635   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:20:11.468759   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:20:11.468928   36023 main.go:141] libmachine: Using SSH client type: native
	I0805 23:20:11.469084   36023 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0805 23:20:11.469094   36023 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-044175 && echo "ha-044175" | sudo tee /etc/hostname
	I0805 23:20:11.584014   36023 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-044175
	
	I0805 23:20:11.584043   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:20:11.587100   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:20:11.587536   36023 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:20:11.587563   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:20:11.587758   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:20:11.587930   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:20:11.588098   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:20:11.588219   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:20:11.588360   36023 main.go:141] libmachine: Using SSH client type: native
	I0805 23:20:11.588509   36023 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0805 23:20:11.588523   36023 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-044175' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-044175/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-044175' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 23:20:11.692374   36023 main.go:141] libmachine: SSH cmd err, output: <nil>: 
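The shell snippet run just above is the usual /etc/hosts patch: it maps 127.0.1.1 to the VM's hostname only when no entry for that hostname exists yet. A rough Go sketch that assembles the same command for an arbitrary hostname (the helper name is illustrative, not minikube's API):

package main

import "fmt"

// ensureHostsCmd reproduces the /etc/hosts snippet from the log for a given
// hostname: if no line already ends in the hostname, rewrite an existing
// 127.0.1.1 entry or append a new one. (Illustrative helper, not minikube code.)
func ensureHostsCmd(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
}

func main() {
	fmt.Println(ensureHostsCmd("ha-044175"))
}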
	I0805 23:20:11.692411   36023 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19373-9606/.minikube CaCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19373-9606/.minikube}
	I0805 23:20:11.692458   36023 buildroot.go:174] setting up certificates
	I0805 23:20:11.692474   36023 provision.go:84] configureAuth start
	I0805 23:20:11.692492   36023 main.go:141] libmachine: (ha-044175) Calling .GetMachineName
	I0805 23:20:11.692736   36023 main.go:141] libmachine: (ha-044175) Calling .GetIP
	I0805 23:20:11.695532   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:20:11.695910   36023 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:20:11.695943   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:20:11.696149   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:20:11.698258   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:20:11.698677   36023 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:20:11.698701   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:20:11.698744   36023 provision.go:143] copyHostCerts
	I0805 23:20:11.698772   36023 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem
	I0805 23:20:11.698808   36023 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem, removing ...
	I0805 23:20:11.698818   36023 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem
	I0805 23:20:11.698904   36023 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem (1123 bytes)
	I0805 23:20:11.699000   36023 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem
	I0805 23:20:11.699035   36023 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem, removing ...
	I0805 23:20:11.699041   36023 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem
	I0805 23:20:11.699089   36023 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem (1679 bytes)
	I0805 23:20:11.699151   36023 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem
	I0805 23:20:11.699172   36023 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem, removing ...
	I0805 23:20:11.699181   36023 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem
	I0805 23:20:11.699215   36023 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem (1082 bytes)
	I0805 23:20:11.699277   36023 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem org=jenkins.ha-044175 san=[127.0.0.1 192.168.39.57 ha-044175 localhost minikube]
	I0805 23:20:11.801111   36023 provision.go:177] copyRemoteCerts
	I0805 23:20:11.801163   36023 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 23:20:11.801182   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:20:11.804141   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:20:11.804513   36023 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:20:11.804541   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:20:11.804763   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:20:11.804985   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:20:11.805221   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:20:11.805391   36023 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/id_rsa Username:docker}
	I0805 23:20:11.886941   36023 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 23:20:11.887017   36023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 23:20:11.915345   36023 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 23:20:11.915417   36023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0805 23:20:11.940652   36023 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 23:20:11.940719   36023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 23:20:11.967966   36023 provision.go:87] duration metric: took 275.477847ms to configureAuth
	I0805 23:20:11.967992   36023 buildroot.go:189] setting minikube options for container-runtime
	I0805 23:20:11.968200   36023 config.go:182] Loaded profile config "ha-044175": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:20:11.968270   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:20:11.970923   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:20:11.971301   36023 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:20:11.971325   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:20:11.971493   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:20:11.971704   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:20:11.971888   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:20:11.972062   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:20:11.972238   36023 main.go:141] libmachine: Using SSH client type: native
	I0805 23:20:11.972414   36023 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0805 23:20:11.972433   36023 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 23:21:42.751850   36023 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 23:21:42.751890   36023 machine.go:97] duration metric: took 1m31.396656241s to provisionDockerMachine
	I0805 23:21:42.751905   36023 start.go:293] postStartSetup for "ha-044175" (driver="kvm2")
	I0805 23:21:42.751921   36023 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 23:21:42.751938   36023 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:21:42.752288   36023 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 23:21:42.752314   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:21:42.755819   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:21:42.756358   36023 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:21:42.756389   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:21:42.756526   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:21:42.756719   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:21:42.756882   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:21:42.757010   36023 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/id_rsa Username:docker}
	I0805 23:21:42.840368   36023 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 23:21:42.844976   36023 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 23:21:42.845001   36023 filesync.go:126] Scanning /home/jenkins/minikube-integration/19373-9606/.minikube/addons for local assets ...
	I0805 23:21:42.845061   36023 filesync.go:126] Scanning /home/jenkins/minikube-integration/19373-9606/.minikube/files for local assets ...
	I0805 23:21:42.845164   36023 filesync.go:149] local asset: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem -> 167922.pem in /etc/ssl/certs
	I0805 23:21:42.845176   36023 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem -> /etc/ssl/certs/167922.pem
	I0805 23:21:42.845264   36023 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 23:21:42.855994   36023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem --> /etc/ssl/certs/167922.pem (1708 bytes)
	I0805 23:21:42.881752   36023 start.go:296] duration metric: took 129.831599ms for postStartSetup
	I0805 23:21:42.881823   36023 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:21:42.882108   36023 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0805 23:21:42.882132   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:21:42.884783   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:21:42.885247   36023 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:21:42.885275   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:21:42.885398   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:21:42.885579   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:21:42.885846   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:21:42.885995   36023 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/id_rsa Username:docker}
	W0805 23:21:42.966113   36023 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0805 23:21:42.966136   36023 fix.go:56] duration metric: took 1m31.632810326s for fixHost
	I0805 23:21:42.966156   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:21:42.968838   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:21:42.969290   36023 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:21:42.969319   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:21:42.969493   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:21:42.969680   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:21:42.969859   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:21:42.969991   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:21:42.970167   36023 main.go:141] libmachine: Using SSH client type: native
	I0805 23:21:42.970323   36023 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.57 22 <nil> <nil>}
	I0805 23:21:42.970332   36023 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 23:21:43.068012   36023 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722900103.033249156
	
	I0805 23:21:43.068044   36023 fix.go:216] guest clock: 1722900103.033249156
	I0805 23:21:43.068056   36023 fix.go:229] Guest: 2024-08-05 23:21:43.033249156 +0000 UTC Remote: 2024-08-05 23:21:42.966143145 +0000 UTC m=+91.756743346 (delta=67.106011ms)
	I0805 23:21:43.068084   36023 fix.go:200] guest clock delta is within tolerance: 67.106011ms
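The `date +%s.%N` exchange above is the guest-clock sanity check: the VM's clock is read over SSH and compared against the host's timestamp for the same moment. A minimal sketch reproducing the 67.106011ms delta reported in the log (the one-second tolerance is an assumption for illustration, not necessarily minikube's threshold):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the log lines above.
	guest := time.Unix(1722900103, 33249156)                                  // guest: 1722900103.033249156
	host := time.Date(2024, time.August, 5, 23, 21, 42, 966143145, time.UTC)  // remote (host) reading

	delta := guest.Sub(host)
	const tolerance = time.Second // assumed tolerance, for illustration only

	fmt.Printf("delta=%v within %v: %t\n", delta, tolerance, delta > -tolerance && delta < tolerance)
	// prints: delta=67.106011ms within 1s: true
}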
	I0805 23:21:43.068093   36023 start.go:83] releasing machines lock for "ha-044175", held for 1m31.734781646s
	I0805 23:21:43.068118   36023 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:21:43.068390   36023 main.go:141] libmachine: (ha-044175) Calling .GetIP
	I0805 23:21:43.071393   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:21:43.071734   36023 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:21:43.071774   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:21:43.071925   36023 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:21:43.072483   36023 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:21:43.072633   36023 main.go:141] libmachine: (ha-044175) Calling .DriverName
	I0805 23:21:43.072729   36023 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 23:21:43.072767   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:21:43.072870   36023 ssh_runner.go:195] Run: cat /version.json
	I0805 23:21:43.072891   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHHostname
	I0805 23:21:43.075495   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:21:43.075569   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:21:43.075855   36023 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:21:43.075881   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:21:43.075904   36023 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:21:43.075919   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:21:43.076054   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:21:43.076168   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHPort
	I0805 23:21:43.076243   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:21:43.076298   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHKeyPath
	I0805 23:21:43.076347   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:21:43.076467   36023 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/id_rsa Username:docker}
	I0805 23:21:43.076483   36023 main.go:141] libmachine: (ha-044175) Calling .GetSSHUsername
	I0805 23:21:43.076656   36023 sshutil.go:53] new ssh client: &{IP:192.168.39.57 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/ha-044175/id_rsa Username:docker}
	I0805 23:21:43.152577   36023 ssh_runner.go:195] Run: systemctl --version
	I0805 23:21:43.176092   36023 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 23:21:43.426720   36023 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 23:21:43.436098   36023 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 23:21:43.436192   36023 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 23:21:43.451465   36023 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0805 23:21:43.451490   36023 start.go:495] detecting cgroup driver to use...
	I0805 23:21:43.451550   36023 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 23:21:43.478451   36023 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 23:21:43.497720   36023 docker.go:217] disabling cri-docker service (if available) ...
	I0805 23:21:43.497777   36023 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 23:21:43.525877   36023 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 23:21:43.542713   36023 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 23:21:43.708709   36023 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 23:21:43.855703   36023 docker.go:233] disabling docker service ...
	I0805 23:21:43.855783   36023 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 23:21:43.873752   36023 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 23:21:43.887975   36023 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 23:21:44.046539   36023 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 23:21:44.196442   36023 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 23:21:44.210803   36023 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 23:21:44.239367   36023 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0805 23:21:44.239419   36023 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:21:44.250894   36023 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 23:21:44.250973   36023 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:21:44.261409   36023 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:21:44.271788   36023 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:21:44.282752   36023 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 23:21:44.293358   36023 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:21:44.303547   36023 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:21:44.314761   36023 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:21:44.324707   36023 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 23:21:44.333905   36023 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 23:21:44.343110   36023 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 23:21:44.489687   36023 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 23:21:53.882601   36023 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.392872217s)
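The run of sed commands at 23:21:44 rewrites the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.9, switch cgroup_manager to cgroupfs, re-add conmon_cgroup = "pod", and allow unprivileged low ports via default_sysctls, after which crio is restarted (here taking ~9.4s). A stdlib-only sketch of the same substitutions applied to a stand-in config string (illustrative; the real drop-in has more settings):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// A tiny stand-in for /etc/crio/crio.conf.d/02-crio.conf.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.8"
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Same effect as the sed invocations in the log.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]")
	fmt.Print(conf)
}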
	I0805 23:21:53.882631   36023 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 23:21:53.882683   36023 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 23:21:53.888039   36023 start.go:563] Will wait 60s for crictl version
	I0805 23:21:53.888102   36023 ssh_runner.go:195] Run: which crictl
	I0805 23:21:53.892001   36023 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 23:21:53.933976   36023 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 23:21:53.934044   36023 ssh_runner.go:195] Run: crio --version
	I0805 23:21:53.964361   36023 ssh_runner.go:195] Run: crio --version
	I0805 23:21:53.995549   36023 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0805 23:21:53.997011   36023 main.go:141] libmachine: (ha-044175) Calling .GetIP
	I0805 23:21:53.999763   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:21:54.000177   36023 main.go:141] libmachine: (ha-044175) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d0:5f:e4", ip: ""} in network mk-ha-044175: {Iface:virbr1 ExpiryTime:2024-08-06 00:10:15 +0000 UTC Type:0 Mac:52:54:00:d0:5f:e4 Iaid: IPaddr:192.168.39.57 Prefix:24 Hostname:ha-044175 Clientid:01:52:54:00:d0:5f:e4}
	I0805 23:21:54.000197   36023 main.go:141] libmachine: (ha-044175) DBG | domain ha-044175 has defined IP address 192.168.39.57 and MAC address 52:54:00:d0:5f:e4 in network mk-ha-044175
	I0805 23:21:54.000365   36023 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0805 23:21:54.005168   36023 kubeadm.go:883] updating cluster {Name:ha-044175 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-044175 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.112 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.201 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.228 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
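The profile dump above describes a four-node HA topology: three control-plane nodes (192.168.39.57, .112, .201) behind the API VIP 192.168.39.254 on port 8443, plus one worker-only node (.228). A small stdlib sketch that restates that node list and tallies the roles (values taken from the dump; the struct is not minikube's config schema):

package main

import "fmt"

// node mirrors just the fields of the profile dump that matter here.
type node struct {
	Name         string
	IP           string
	ControlPlane bool
}

func main() {
	nodes := []node{
		{"", "192.168.39.57", true},      // primary control plane (ha-044175)
		{"m02", "192.168.39.112", true},  // secondary control plane
		{"m03", "192.168.39.201", true},  // secondary control plane
		{"m04", "192.168.39.228", false}, // worker only
	}
	cp := 0
	for _, n := range nodes {
		if n.ControlPlane {
			cp++
		}
	}
	fmt.Printf("control-plane nodes: %d, workers: %d, API VIP: %s:8443\n",
		cp, len(nodes)-cp, "192.168.39.254")
}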
	I0805 23:21:54.005291   36023 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 23:21:54.005344   36023 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 23:21:54.051772   36023 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 23:21:54.051790   36023 crio.go:433] Images already preloaded, skipping extraction
	I0805 23:21:54.051849   36023 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 23:21:54.086832   36023 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 23:21:54.086857   36023 cache_images.go:84] Images are preloaded, skipping loading
	I0805 23:21:54.086868   36023 kubeadm.go:934] updating node { 192.168.39.57 8443 v1.30.3 crio true true} ...
	I0805 23:21:54.086983   36023 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-044175 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.57
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-044175 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 23:21:54.087101   36023 ssh_runner.go:195] Run: crio config
	I0805 23:21:54.137581   36023 cni.go:84] Creating CNI manager for ""
	I0805 23:21:54.137603   36023 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0805 23:21:54.137615   36023 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 23:21:54.137639   36023 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.57 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-044175 NodeName:ha-044175 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.57"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.57 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 23:21:54.137779   36023 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.57
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-044175"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.57
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.57"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 23:21:54.137804   36023 kube-vip.go:115] generating kube-vip config ...
	I0805 23:21:54.137857   36023 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0805 23:21:54.149472   36023 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0805 23:21:54.149596   36023 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
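The generated kube-vip static pod advertises 192.168.39.254 as the load-balanced control-plane VIP on port 8443 (cp_enable/lb_enable, with leader election over the plndr-cp-lock lease). One quick way to check that the VIP answers, sketched in Go and assuming the apiserver exposes the standard /livez endpoint; certificate verification is skipped because this sketch does not load the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver cert is signed by the cluster CA, which this
			// probe does not load, so skip verification here.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.254:8443/livez")
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("VIP responded with:", resp.Status) // even 401/403 proves the VIP routes traffic
}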
	I0805 23:21:54.149650   36023 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 23:21:54.160037   36023 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 23:21:54.160090   36023 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0805 23:21:54.169496   36023 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0805 23:21:54.187044   36023 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 23:21:54.205158   36023 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0805 23:21:54.222292   36023 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0805 23:21:54.240454   36023 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0805 23:21:54.245443   36023 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 23:21:54.392594   36023 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 23:21:54.407717   36023 certs.go:68] Setting up /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175 for IP: 192.168.39.57
	I0805 23:21:54.407738   36023 certs.go:194] generating shared ca certs ...
	I0805 23:21:54.407753   36023 certs.go:226] acquiring lock for ca certs: {Name:mkf35a042c1656d191f542eee7fa087aad4d29d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 23:21:54.407879   36023 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key
	I0805 23:21:54.407930   36023 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key
	I0805 23:21:54.407937   36023 certs.go:256] generating profile certs ...
	I0805 23:21:54.408001   36023 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/client.key
	I0805 23:21:54.408027   36023 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key.cb584d1b
	I0805 23:21:54.408040   36023 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt.cb584d1b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.57 192.168.39.112 192.168.39.201 192.168.39.254]
	I0805 23:21:54.763069   36023 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt.cb584d1b ...
	I0805 23:21:54.763103   36023 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt.cb584d1b: {Name:mk1a963e63c48b245bb8cae0d4c77d2e6a272041 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 23:21:54.763266   36023 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key.cb584d1b ...
	I0805 23:21:54.763280   36023 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key.cb584d1b: {Name:mkb0217b66b1058ef522d13f78348c47d2230a95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 23:21:54.763344   36023 certs.go:381] copying /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt.cb584d1b -> /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt
	I0805 23:21:54.763477   36023 certs.go:385] copying /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key.cb584d1b -> /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key
	I0805 23:21:54.763599   36023 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.key
	I0805 23:21:54.763614   36023 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0805 23:21:54.763627   36023 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0805 23:21:54.763637   36023 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0805 23:21:54.763650   36023 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0805 23:21:54.763661   36023 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0805 23:21:54.763671   36023 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0805 23:21:54.763687   36023 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0805 23:21:54.763699   36023 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0805 23:21:54.763746   36023 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792.pem (1338 bytes)
	W0805 23:21:54.763777   36023 certs.go:480] ignoring /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792_empty.pem, impossibly tiny 0 bytes
	I0805 23:21:54.763783   36023 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 23:21:54.763803   36023 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem (1082 bytes)
	I0805 23:21:54.763820   36023 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem (1123 bytes)
	I0805 23:21:54.763841   36023 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem (1679 bytes)
	I0805 23:21:54.763875   36023 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem (1708 bytes)
	I0805 23:21:54.763899   36023 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0805 23:21:54.763912   36023 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792.pem -> /usr/share/ca-certificates/16792.pem
	I0805 23:21:54.763923   36023 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem -> /usr/share/ca-certificates/167922.pem
	I0805 23:21:54.764429   36023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 23:21:54.793469   36023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0805 23:21:54.819427   36023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 23:21:54.845089   36023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 23:21:54.870074   36023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0805 23:21:54.895799   36023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 23:21:54.920920   36023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 23:21:54.946520   36023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/ha-044175/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0805 23:21:54.970748   36023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 23:21:54.994658   36023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792.pem --> /usr/share/ca-certificates/16792.pem (1338 bytes)
	I0805 23:21:55.019117   36023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem --> /usr/share/ca-certificates/167922.pem (1708 bytes)
	I0805 23:21:55.043437   36023 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 23:21:55.060413   36023 ssh_runner.go:195] Run: openssl version
	I0805 23:21:55.066853   36023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16792.pem && ln -fs /usr/share/ca-certificates/16792.pem /etc/ssl/certs/16792.pem"
	I0805 23:21:55.077479   36023 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16792.pem
	I0805 23:21:55.082037   36023 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 23:03 /usr/share/ca-certificates/16792.pem
	I0805 23:21:55.082095   36023 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16792.pem
	I0805 23:21:55.087808   36023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16792.pem /etc/ssl/certs/51391683.0"
	I0805 23:21:55.097099   36023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167922.pem && ln -fs /usr/share/ca-certificates/167922.pem /etc/ssl/certs/167922.pem"
	I0805 23:21:55.107692   36023 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167922.pem
	I0805 23:21:55.112146   36023 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 23:03 /usr/share/ca-certificates/167922.pem
	I0805 23:21:55.112182   36023 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167922.pem
	I0805 23:21:55.117763   36023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167922.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 23:21:55.126784   36023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 23:21:55.137634   36023 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 23:21:55.142200   36023 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0805 23:21:55.142243   36023 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 23:21:55.148293   36023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
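The `openssl x509 -hash` / `ln -fs` pairs above install each CA under /etc/ssl/certs by its OpenSSL subject hash (for example minikubeCA.pem -> b5213941.0), which is how OpenSSL-based clients locate trust anchors. A sketch that wraps the same two commands (the helper name is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash mirrors the two shell steps from the log: ask openssl for
// the certificate's subject hash, then symlink /etc/ssl/certs/<hash>.0 to it.
func linkBySubjectHash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}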
	I0805 23:21:55.157937   36023 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 23:21:55.162495   36023 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 23:21:55.168345   36023 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 23:21:55.174266   36023 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 23:21:55.180002   36023 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 23:21:55.185923   36023 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 23:21:55.191533   36023 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
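Each `openssl x509 -checkend 86400` above asks whether the certificate will expire within the next 24 hours (86400 seconds), which is what allows the restart to skip regenerating them. The equivalent check with Go's crypto/x509, as a rough sketch:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file will
// expire within d, mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", expiring)
}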
	I0805 23:21:55.197330   36023 kubeadm.go:392] StartCluster: {Name:ha-044175 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-044175 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.57 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.112 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.201 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.228 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 23:21:55.197466   36023 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 23:21:55.197517   36023 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 23:21:55.237272   36023 cri.go:89] found id: "d46bdf5c93d9a335000c2d92e3814610ae1e74850c28c7ec832821e7ed10c1b6"
	I0805 23:21:55.237297   36023 cri.go:89] found id: "d84c6fc25afe5bdf844e9489b06726f7f183fbc38a418926f652ec79c6e9e559"
	I0805 23:21:55.237301   36023 cri.go:89] found id: "1a47cf65b14975f4678f4b5794ac4f45733e19f22e2b659a18baad22d1394d26"
	I0805 23:21:55.237304   36023 cri.go:89] found id: "bb38cdefb5246fc31da8b49e32a081eb2003b9c9de9c7c5941b6e563179848e7"
	I0805 23:21:55.237306   36023 cri.go:89] found id: "2e11762a0814597bbc6d2cdd8b65c5f03a1970af0ad39df0b7e88eb542fad309"
	I0805 23:21:55.237309   36023 cri.go:89] found id: "4617bbebfc992da16ee550b4c2c74a6d4c58299fe2518f6d24c3a10b1e02c941"
	I0805 23:21:55.237312   36023 cri.go:89] found id: "e65205c398221a15eecea1ec1092d54f364a44886b05149400c7be5ffafc3285"
	I0805 23:21:55.237314   36023 cri.go:89] found id: "97fa319bea82614cab7525f9052bcc8a09fad765b260045dbf0d0fa0ca0290b2"
	I0805 23:21:55.237316   36023 cri.go:89] found id: "04c382fd4a32fe8685a6f643ecf7a291e4d542c2223975f9df92991fe566b12a"
	I0805 23:21:55.237321   36023 cri.go:89] found id: "40fc9655d4bc3a83cded30a0628a93c01856e1db81e027d8d131004479df9ed3"
	I0805 23:21:55.237323   36023 cri.go:89] found id: "b0893967672c7dc591bbcf220e56601b8a46fc11f07e63adbadaddec59ec1803"
	I0805 23:21:55.237328   36023 cri.go:89] found id: "2a85f2254a23cdec7e89ff8de2e31b06ddf2853808330965760217f1fd834004"
	I0805 23:21:55.237331   36023 cri.go:89] found id: "0c90a080943378c8bb82560d92b4399ff4ea03ab68d06f0de21852e1df609090"
	I0805 23:21:55.237333   36023 cri.go:89] found id: "52e65ab51d03f5a6abf04b86a788a251259de2c7971b7f676c0b5c5eb33e5849"
	I0805 23:21:55.237337   36023 cri.go:89] found id: ""
	I0805 23:21:55.237377   36023 ssh_runner.go:195] Run: sudo runc list -f json
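	The cri.go lines above come from enumerating kube-system containers: minikube runs crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system over SSH, records each returned ID as a "found id", and then cross-checks the runtime state with runc list -f json. A minimal local sketch of that crictl step (an illustration, not the real ssh_runner-based implementation) could look like:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listKubeSystemContainerIDs mirrors the crictl invocation in the log above:
	// it asks the CRI runtime for all containers labelled with the kube-system
	// namespace and returns their IDs, one per line of crictl's --quiet output.
	func listKubeSystemContainerIDs() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := listKubeSystemContainerIDs()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		for _, id := range ids {
			fmt.Println("found id:", id)
		}
	}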
	
	
	==> CRI-O <==
	Aug 05 23:27:56 ha-044175 crio[3920]: time="2024-08-05 23:27:56.097446161Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7f6d8ef1-d33c-4d10-b925-666572919ebc name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:27:56 ha-044175 crio[3920]: time="2024-08-05 23:27:56.097827250Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd7e94739a082d7384a7b998066384667954ebe9cc11847395a104db1a104317,PodSandboxId:77ac7fe6a83e0516a216fd1d55d638ed87cfcdf5723e5e28856ee5df04b14760,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722900187738951526,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d30d1a5b-cfbe-4de6-a964-75c32e5dbf62,},Annotations:map[string]string{io.kubernetes.container.hash: 4378961a,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6528b925e75da18994cd673a201712eb241eeff865202c130034f40f0a350bb8,PodSandboxId:0f530473c6518daba2504d48da181c58689c44ffd19685987529bd79bbfdd8bf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722900169724492024,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de889c914a63f88b5552d92d7c04005b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cf9b7cb63859c9cfe968fc20b9dacecfc681905714bc14a19a78ba20314f787,PodSandboxId:f6231b23266daa7beda5c2eb7b84162e5fe7c14db8b3c9ddcd78304bf2ec722c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722900162729981582,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5280d6dbae40883a34349dd31a13a779,},Annotations:map[string]string{io.kubernetes.container.hash: bd2d1b8f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77b449aa0776d43116dbd794f0249b6e5fc5d747d7f6a8bc9604aebafc20ba74,PodSandboxId:2ff7308f4be3e77295c107b65333964734b52e07163e7f28b5c122b5225d1d4d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722900156038540995,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wmfql,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfc8bad7-d43d-4beb-991e-339a4ce96ab5,},Annotations:map[string]string{io.kubernetes.container.hash: fc00d50e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91055df10dc934cc3b2614f239bef7e465aa9809f34bba79c6de90604d74f7ca,PodSandboxId:68d4fc648e15948920c68a4aad97654ab8e34af2ae6e4e2ecdd3c173abf8148d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722900137200072773,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd673cb8fe1efcc8b643555b76eaad93,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a11b5de6fd020c3af228be69825e370ecef21ab78d774519dac722cf721bb6e6,PodSandboxId:3f0c789e63c6b8da2eaddf246bf22fac58253370f7977c637db0653e6efb8ad4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722900124470501686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g9bml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd474413-e416-48db-a7bf-f3c40675819b,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd67db4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"cont
ainerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:224c4967d5e92ceb088df11f70040bbd62d3bf073b04182cb32278b2db2419b1,PodSandboxId:77ac7fe6a83e0516a216fd1d55d638ed87cfcdf5723e5e28856ee5df04b14760,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722900122857826642,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d30d1a5b-cfbe-4de6-a964-75c32e5dbf62,},Annotations:map[string]string{io.kubernete
s.container.hash: 4378961a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97768d7c5371dd0c06071b82c8baadd28ee604281812facf0dbd4a723ea92274,PodSandboxId:b949dd01383277f7e3efd577b7b6302bc9888e365a2106061d4a3a3119168a36,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1722900122962106602,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xqx4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8455705e-b140-4f1e-abff-6a71bbb5415f,},Annotations:map[string]string{io.kubernetes.container.hash: 4a9283b6,io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aecc482892c69f412b19a67ecbfb961e4799ff113afee62cf254d8accc9e43a,PodSandboxId:e82fdb05fd230a5ff78128ae533e9617633b9f37f9a0671378ee9706bc2188c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722900122848074225,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vzhst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9c09745-be29-4403-9e7d-f9e4eaae5cac,},Annotations:map[string]string{io.kubernetes.container.hash: 1a8c310a,io.kubernetes.container.ports: [{\"nam
e\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da62836e55aaaf8eee39a34113a3d41ba6489986d26134bed80020f8c7164507,PodSandboxId:5d40c713023d2ce8f1fd3f024181a8566c041373be34fbbdd28a7966391af628,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722900122740850436,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-044175,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 47fd3d59fe4024c671f4b57dbae12a83,},Annotations:map[string]string{io.kubernetes.container.hash: fa9a7bc3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f43d5e7445c285e5783c937039be219df8aaea8c9db899259f8d24c895a378c,PodSandboxId:1e5c99969ac60dcfb40f80c63b009c0e6efc07de9fccdd5c48b9097ed4f8bf63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722900122542678745,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vj5sd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c9
cdcb-e1b7-44c8-a6e3-5e5aeb76ba03,},Annotations:map[string]string{io.kubernetes.container.hash: a40979c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd436770dad332628ad6a3b7fea663d52dda62901d07f6c1bfa5cf82ddae4f61,PodSandboxId:0f530473c6518daba2504d48da181c58689c44ffd19685987529bd79bbfdd8bf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722900122697717291,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: de889c914a63f88b5552d92d7c04005b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95d6da5b264d99c2ae66291b9df0943d6f8ac4b1743a5bef2caebaaa9fa1694c,PodSandboxId:f6231b23266daa7beda5c2eb7b84162e5fe7c14db8b3c9ddcd78304bf2ec722c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722900122673726758,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5280d6dbae40883a3
4349dd31a13a779,},Annotations:map[string]string{io.kubernetes.container.hash: bd2d1b8f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5537b3a8dbcb27d26dc336a48652fdd3385ec0fb3b5169e72e472a665bc2e3ed,PodSandboxId:0b1220acf56ca1985bed119e03dfdc76cb09d54439c45a2488b7b06933c1f3be,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722900122644546831,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87091e6c521c934e57911d0cd84fc454,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d46bdf5c93d9a335000c2d92e3814610ae1e74850c28c7ec832821e7ed10c1b6,PodSandboxId:212b1287cb785d37bec039a02eceff99c8d4258dd1905092b47149fba9f31b8e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722900103405605088,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g9bml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd474413-e416-48db-a7bf-f3c40675819b,},Annotations:map[string]string{io.ku
bernetes.container.hash: 1bd67db4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f7140ac408890dd788c7a9d6a9857531edad86ff751157ac035e6ab0d4afdc,PodSandboxId:1bf94d816bd6b0f9325f20c0b2453330291a5dfa79448419ddd925a97f951bb9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722899618925272516,Labels:map[string]str
ing{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wmfql,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfc8bad7-d43d-4beb-991e-339a4ce96ab5,},Annotations:map[string]string{io.kubernetes.container.hash: fc00d50e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e65205c398221a15eecea1ec1092d54f364a44886b05149400c7be5ffafc3285,PodSandboxId:0df1c00cbbb9d6891997d631537dd7662e552d8dca3cea20f0b653ed34f6f7bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722899473822035995,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vzhst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9c09745-be29-4403-9e7d-f9e4eaae5cac,},Annotations:map[string]string{io.kubernetes.container.hash: 1a8c310a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97fa319bea82614cab7525f9052bcc8a09fad765b260045dbf0d0fa0ca0290b2,PodSandboxId:4f369251bc6de76b6eba2d8a6404cb53a6bcba17f58bd09854de9edd65d080fa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1722899461696983959,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xqx4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8455705e-b140-4f1e-abff-6a71bbb5415f,},Annotations:map[string]string{io.kubernetes.container.hash: 4a9283b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04c382fd4a32fe8685a6f643ecf7a291e4d542c2223975f9df92991fe566b12a,PodSandboxId:b7b77d3f5c8a24f9906eb41c479b7254cd21f7c4d0c34b7014bdfa5f666df829,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722899457757352731,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vj5sd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c9cdcb-e1b7-44c8-a6e3-5e5aeb76ba03,},Annotations:map[string]string{io.kubernetes.container.hash: a40979c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0893967672c7dc591bbcf220e56601b8a46fc11f07e63adbadaddec59ec1803,PodSandboxId:c7f5da3aca5fb3bac198b9144677aac33c3f5317946dad29f46e726a35d2c596,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722899438287916526,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47fd3d59fe4024c671f4b57dbae12a83,},Annotations:map[string]string{io.kubernetes.container.hash: fa9a7bc3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a85f2254a23cdec7e89ff8de2e31b06ddf2853808330965760217f1fd834004,PodSandboxId:57dd6eb50740256e4db3c59d0c1d850b0ba784d01abbeb7f8ea139160576fc43,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722899438266931166,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87091e6c521c934e57911d0cd84fc454,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7f6d8ef1-d33c-4d10-b925-666572919ebc name=/runtime.v1.RuntimeService/ListContainers
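	The CRI-O entries above are debug logs of CRI gRPC calls (Version, ImageFsInfo, ListContainers) issued against /runtime.v1.RuntimeService while the post-mortem was collected; an empty ContainerFilter is what produces the "No filters were applied, returning full container list" lines. A hedged sketch of issuing the same ListContainers call directly against CRI-O's socket with the k8s.io/cri-api client (socket path assumed to be the CRI-O default, not taken from this report) might look like:

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Connect to CRI-O's CRI socket (default path; adjust if yours differs).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// An empty ListContainersRequest matches every container, which is what
		// triggers the "No filters were applied" debug message in the log above.
		client := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.GetContainers() {
			fmt.Printf("%s  %-25s %s\n", c.GetId(), c.GetMetadata().GetName(), c.GetState())
		}
	}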
	Aug 05 23:27:56 ha-044175 crio[3920]: time="2024-08-05 23:27:56.107812833Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=498ec4f1-67df-44d1-a9b5-4ab6f16a4d03 name=/runtime.v1.RuntimeService/Version
	Aug 05 23:27:56 ha-044175 crio[3920]: time="2024-08-05 23:27:56.107928263Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=498ec4f1-67df-44d1-a9b5-4ab6f16a4d03 name=/runtime.v1.RuntimeService/Version
	Aug 05 23:27:56 ha-044175 crio[3920]: time="2024-08-05 23:27:56.109599227Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bd8e6c9f-fa14-42da-a351-305ea6af7101 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:27:56 ha-044175 crio[3920]: time="2024-08-05 23:27:56.110040557Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722900476110018513,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bd8e6c9f-fa14-42da-a351-305ea6af7101 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:27:56 ha-044175 crio[3920]: time="2024-08-05 23:27:56.110691823Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1938980a-3911-45a3-8ec1-f679126f42d8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:27:56 ha-044175 crio[3920]: time="2024-08-05 23:27:56.110763817Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1938980a-3911-45a3-8ec1-f679126f42d8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:27:56 ha-044175 crio[3920]: time="2024-08-05 23:27:56.111266118Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd7e94739a082d7384a7b998066384667954ebe9cc11847395a104db1a104317,PodSandboxId:77ac7fe6a83e0516a216fd1d55d638ed87cfcdf5723e5e28856ee5df04b14760,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722900187738951526,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d30d1a5b-cfbe-4de6-a964-75c32e5dbf62,},Annotations:map[string]string{io.kubernetes.container.hash: 4378961a,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6528b925e75da18994cd673a201712eb241eeff865202c130034f40f0a350bb8,PodSandboxId:0f530473c6518daba2504d48da181c58689c44ffd19685987529bd79bbfdd8bf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722900169724492024,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de889c914a63f88b5552d92d7c04005b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cf9b7cb63859c9cfe968fc20b9dacecfc681905714bc14a19a78ba20314f787,PodSandboxId:f6231b23266daa7beda5c2eb7b84162e5fe7c14db8b3c9ddcd78304bf2ec722c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722900162729981582,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5280d6dbae40883a34349dd31a13a779,},Annotations:map[string]string{io.kubernetes.container.hash: bd2d1b8f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77b449aa0776d43116dbd794f0249b6e5fc5d747d7f6a8bc9604aebafc20ba74,PodSandboxId:2ff7308f4be3e77295c107b65333964734b52e07163e7f28b5c122b5225d1d4d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722900156038540995,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wmfql,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfc8bad7-d43d-4beb-991e-339a4ce96ab5,},Annotations:map[string]string{io.kubernetes.container.hash: fc00d50e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91055df10dc934cc3b2614f239bef7e465aa9809f34bba79c6de90604d74f7ca,PodSandboxId:68d4fc648e15948920c68a4aad97654ab8e34af2ae6e4e2ecdd3c173abf8148d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722900137200072773,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd673cb8fe1efcc8b643555b76eaad93,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a11b5de6fd020c3af228be69825e370ecef21ab78d774519dac722cf721bb6e6,PodSandboxId:3f0c789e63c6b8da2eaddf246bf22fac58253370f7977c637db0653e6efb8ad4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722900124470501686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g9bml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd474413-e416-48db-a7bf-f3c40675819b,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd67db4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"cont
ainerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:224c4967d5e92ceb088df11f70040bbd62d3bf073b04182cb32278b2db2419b1,PodSandboxId:77ac7fe6a83e0516a216fd1d55d638ed87cfcdf5723e5e28856ee5df04b14760,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722900122857826642,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d30d1a5b-cfbe-4de6-a964-75c32e5dbf62,},Annotations:map[string]string{io.kubernete
s.container.hash: 4378961a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97768d7c5371dd0c06071b82c8baadd28ee604281812facf0dbd4a723ea92274,PodSandboxId:b949dd01383277f7e3efd577b7b6302bc9888e365a2106061d4a3a3119168a36,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1722900122962106602,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xqx4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8455705e-b140-4f1e-abff-6a71bbb5415f,},Annotations:map[string]string{io.kubernetes.container.hash: 4a9283b6,io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aecc482892c69f412b19a67ecbfb961e4799ff113afee62cf254d8accc9e43a,PodSandboxId:e82fdb05fd230a5ff78128ae533e9617633b9f37f9a0671378ee9706bc2188c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722900122848074225,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vzhst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9c09745-be29-4403-9e7d-f9e4eaae5cac,},Annotations:map[string]string{io.kubernetes.container.hash: 1a8c310a,io.kubernetes.container.ports: [{\"nam
e\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da62836e55aaaf8eee39a34113a3d41ba6489986d26134bed80020f8c7164507,PodSandboxId:5d40c713023d2ce8f1fd3f024181a8566c041373be34fbbdd28a7966391af628,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722900122740850436,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-044175,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 47fd3d59fe4024c671f4b57dbae12a83,},Annotations:map[string]string{io.kubernetes.container.hash: fa9a7bc3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f43d5e7445c285e5783c937039be219df8aaea8c9db899259f8d24c895a378c,PodSandboxId:1e5c99969ac60dcfb40f80c63b009c0e6efc07de9fccdd5c48b9097ed4f8bf63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722900122542678745,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vj5sd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c9
cdcb-e1b7-44c8-a6e3-5e5aeb76ba03,},Annotations:map[string]string{io.kubernetes.container.hash: a40979c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd436770dad332628ad6a3b7fea663d52dda62901d07f6c1bfa5cf82ddae4f61,PodSandboxId:0f530473c6518daba2504d48da181c58689c44ffd19685987529bd79bbfdd8bf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722900122697717291,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: de889c914a63f88b5552d92d7c04005b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95d6da5b264d99c2ae66291b9df0943d6f8ac4b1743a5bef2caebaaa9fa1694c,PodSandboxId:f6231b23266daa7beda5c2eb7b84162e5fe7c14db8b3c9ddcd78304bf2ec722c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722900122673726758,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5280d6dbae40883a3
4349dd31a13a779,},Annotations:map[string]string{io.kubernetes.container.hash: bd2d1b8f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5537b3a8dbcb27d26dc336a48652fdd3385ec0fb3b5169e72e472a665bc2e3ed,PodSandboxId:0b1220acf56ca1985bed119e03dfdc76cb09d54439c45a2488b7b06933c1f3be,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722900122644546831,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87091e6c521c934e57911d0cd84fc454,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d46bdf5c93d9a335000c2d92e3814610ae1e74850c28c7ec832821e7ed10c1b6,PodSandboxId:212b1287cb785d37bec039a02eceff99c8d4258dd1905092b47149fba9f31b8e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722900103405605088,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g9bml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd474413-e416-48db-a7bf-f3c40675819b,},Annotations:map[string]string{io.ku
bernetes.container.hash: 1bd67db4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f7140ac408890dd788c7a9d6a9857531edad86ff751157ac035e6ab0d4afdc,PodSandboxId:1bf94d816bd6b0f9325f20c0b2453330291a5dfa79448419ddd925a97f951bb9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722899618925272516,Labels:map[string]str
ing{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wmfql,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfc8bad7-d43d-4beb-991e-339a4ce96ab5,},Annotations:map[string]string{io.kubernetes.container.hash: fc00d50e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e65205c398221a15eecea1ec1092d54f364a44886b05149400c7be5ffafc3285,PodSandboxId:0df1c00cbbb9d6891997d631537dd7662e552d8dca3cea20f0b653ed34f6f7bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722899473822035995,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vzhst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9c09745-be29-4403-9e7d-f9e4eaae5cac,},Annotations:map[string]string{io.kubernetes.container.hash: 1a8c310a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97fa319bea82614cab7525f9052bcc8a09fad765b260045dbf0d0fa0ca0290b2,PodSandboxId:4f369251bc6de76b6eba2d8a6404cb53a6bcba17f58bd09854de9edd65d080fa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1722899461696983959,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xqx4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8455705e-b140-4f1e-abff-6a71bbb5415f,},Annotations:map[string]string{io.kubernetes.container.hash: 4a9283b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04c382fd4a32fe8685a6f643ecf7a291e4d542c2223975f9df92991fe566b12a,PodSandboxId:b7b77d3f5c8a24f9906eb41c479b7254cd21f7c4d0c34b7014bdfa5f666df829,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722899457757352731,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vj5sd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c9cdcb-e1b7-44c8-a6e3-5e5aeb76ba03,},Annotations:map[string]string{io.kubernetes.container.hash: a40979c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0893967672c7dc591bbcf220e56601b8a46fc11f07e63adbadaddec59ec1803,PodSandboxId:c7f5da3aca5fb3bac198b9144677aac33c3f5317946dad29f46e726a35d2c596,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722899438287916526,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47fd3d59fe4024c671f4b57dbae12a83,},Annotations:map[string]string{io.kubernetes.container.hash: fa9a7bc3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a85f2254a23cdec7e89ff8de2e31b06ddf2853808330965760217f1fd834004,PodSandboxId:57dd6eb50740256e4db3c59d0c1d850b0ba784d01abbeb7f8ea139160576fc43,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722899438266931166,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87091e6c521c934e57911d0cd84fc454,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1938980a-3911-45a3-8ec1-f679126f42d8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:27:56 ha-044175 crio[3920]: time="2024-08-05 23:27:56.139873973Z" level=debug msg="Request: &ListImagesRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=84763c0f-9391-46f8-8b9a-b6dcd281f529 name=/runtime.v1.ImageService/ListImages
	Aug 05 23:27:56 ha-044175 crio[3920]: time="2024-08-05 23:27:56.140474070Z" level=debug msg="Response: &ListImagesResponse{Images:[]*Image{&Image{Id:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,RepoTags:[registry.k8s.io/kube-apiserver:v1.30.3],RepoDigests:[registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315],Size_:117609954,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,RepoTags:[registry.k8s.io/kube-controller-manager:v1.30.3],RepoDigests:[registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7 registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e],Size_:112198984,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{
Id:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,RepoTags:[registry.k8s.io/kube-scheduler:v1.30.3],RepoDigests:[registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266 registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4],Size_:63051080,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,RepoTags:[registry.k8s.io/kube-proxy:v1.30.3],RepoDigests:[registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80 registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65],Size_:85953945,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,RepoTags:[registry.k8s.io/pause:3.9],RepoDigests:[registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 re
gistry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10],Size_:750414,Uid:&Int64Value{Value:65535,},Username:,Spec:nil,Pinned:true,},&Image{Id:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,RepoTags:[registry.k8s.io/etcd:3.5.12-0],RepoDigests:[registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62 registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b],Size_:150779692,Uid:&Int64Value{Value:0,},Username:,Spec:nil,Pinned:false,},&Image{Id:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,RepoTags:[registry.k8s.io/coredns/coredns:v1.11.1],RepoDigests:[registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870],Size_:61245718,Uid:nil,Username:nonroot,Spec:nil,Pinned:false,},&Image{Id:6e38f40d628db3002f5617342c8872c935de530d8
67d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f,RepoTags:[docker.io/kindest/kindnetd:v20240715-585640e9],RepoDigests:[docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115 docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493],Size_:87165492,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,RepoTags:[ghcr.io/kube-vip/kube-vip:v0.8.0],RepoDigests:[ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f ghcr.io/kube-vip/kub
e-vip@sha256:7eb725aff32fd4b31484f6e8e44b538f8403ebc8bd4218ea0ec28218682afff1],Size_:49570267,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,RepoTags:[docker.io/kindest/kindnetd:v20240730-75a5af0c],RepoDigests:[docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3 docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a],Size_:87165492,Uid:nil,Username:,Spec:nil,Pinned:false,},&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,Pinned:false,},},}" file="otel-collector/interceptors.go:74" id=84763c0f-9391-46f8-8b9a-b6dcd281f529 name=/runtim
e.v1.ImageService/ListImages
	Aug 05 23:27:56 ha-044175 crio[3920]: time="2024-08-05 23:27:56.161796990Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6c4d2f20-60a9-4521-9087-7d0e77af9619 name=/runtime.v1.RuntimeService/Version
	Aug 05 23:27:56 ha-044175 crio[3920]: time="2024-08-05 23:27:56.161883334Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6c4d2f20-60a9-4521-9087-7d0e77af9619 name=/runtime.v1.RuntimeService/Version
	Aug 05 23:27:56 ha-044175 crio[3920]: time="2024-08-05 23:27:56.162890493Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c68a510c-95e8-4132-a3d6-911e30d77c70 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:27:56 ha-044175 crio[3920]: time="2024-08-05 23:27:56.163648742Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722900476163569899,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c68a510c-95e8-4132-a3d6-911e30d77c70 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:27:56 ha-044175 crio[3920]: time="2024-08-05 23:27:56.164970899Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=224641a1-f361-45d5-83b3-b391e28e059a name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:27:56 ha-044175 crio[3920]: time="2024-08-05 23:27:56.165048545Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=224641a1-f361-45d5-83b3-b391e28e059a name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:27:56 ha-044175 crio[3920]: time="2024-08-05 23:27:56.166569817Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd7e94739a082d7384a7b998066384667954ebe9cc11847395a104db1a104317,PodSandboxId:77ac7fe6a83e0516a216fd1d55d638ed87cfcdf5723e5e28856ee5df04b14760,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722900187738951526,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d30d1a5b-cfbe-4de6-a964-75c32e5dbf62,},Annotations:map[string]string{io.kubernetes.container.hash: 4378961a,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6528b925e75da18994cd673a201712eb241eeff865202c130034f40f0a350bb8,PodSandboxId:0f530473c6518daba2504d48da181c58689c44ffd19685987529bd79bbfdd8bf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722900169724492024,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de889c914a63f88b5552d92d7c04005b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cf9b7cb63859c9cfe968fc20b9dacecfc681905714bc14a19a78ba20314f787,PodSandboxId:f6231b23266daa7beda5c2eb7b84162e5fe7c14db8b3c9ddcd78304bf2ec722c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722900162729981582,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5280d6dbae40883a34349dd31a13a779,},Annotations:map[string]string{io.kubernetes.container.hash: bd2d1b8f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77b449aa0776d43116dbd794f0249b6e5fc5d747d7f6a8bc9604aebafc20ba74,PodSandboxId:2ff7308f4be3e77295c107b65333964734b52e07163e7f28b5c122b5225d1d4d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722900156038540995,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wmfql,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfc8bad7-d43d-4beb-991e-339a4ce96ab5,},Annotations:map[string]string{io.kubernetes.container.hash: fc00d50e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91055df10dc934cc3b2614f239bef7e465aa9809f34bba79c6de90604d74f7ca,PodSandboxId:68d4fc648e15948920c68a4aad97654ab8e34af2ae6e4e2ecdd3c173abf8148d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722900137200072773,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd673cb8fe1efcc8b643555b76eaad93,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a11b5de6fd020c3af228be69825e370ecef21ab78d774519dac722cf721bb6e6,PodSandboxId:3f0c789e63c6b8da2eaddf246bf22fac58253370f7977c637db0653e6efb8ad4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722900124470501686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g9bml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd474413-e416-48db-a7bf-f3c40675819b,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd67db4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"cont
ainerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:224c4967d5e92ceb088df11f70040bbd62d3bf073b04182cb32278b2db2419b1,PodSandboxId:77ac7fe6a83e0516a216fd1d55d638ed87cfcdf5723e5e28856ee5df04b14760,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722900122857826642,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d30d1a5b-cfbe-4de6-a964-75c32e5dbf62,},Annotations:map[string]string{io.kubernete
s.container.hash: 4378961a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97768d7c5371dd0c06071b82c8baadd28ee604281812facf0dbd4a723ea92274,PodSandboxId:b949dd01383277f7e3efd577b7b6302bc9888e365a2106061d4a3a3119168a36,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1722900122962106602,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xqx4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8455705e-b140-4f1e-abff-6a71bbb5415f,},Annotations:map[string]string{io.kubernetes.container.hash: 4a9283b6,io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aecc482892c69f412b19a67ecbfb961e4799ff113afee62cf254d8accc9e43a,PodSandboxId:e82fdb05fd230a5ff78128ae533e9617633b9f37f9a0671378ee9706bc2188c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722900122848074225,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vzhst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9c09745-be29-4403-9e7d-f9e4eaae5cac,},Annotations:map[string]string{io.kubernetes.container.hash: 1a8c310a,io.kubernetes.container.ports: [{\"nam
e\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da62836e55aaaf8eee39a34113a3d41ba6489986d26134bed80020f8c7164507,PodSandboxId:5d40c713023d2ce8f1fd3f024181a8566c041373be34fbbdd28a7966391af628,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722900122740850436,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-044175,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 47fd3d59fe4024c671f4b57dbae12a83,},Annotations:map[string]string{io.kubernetes.container.hash: fa9a7bc3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f43d5e7445c285e5783c937039be219df8aaea8c9db899259f8d24c895a378c,PodSandboxId:1e5c99969ac60dcfb40f80c63b009c0e6efc07de9fccdd5c48b9097ed4f8bf63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722900122542678745,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vj5sd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c9
cdcb-e1b7-44c8-a6e3-5e5aeb76ba03,},Annotations:map[string]string{io.kubernetes.container.hash: a40979c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd436770dad332628ad6a3b7fea663d52dda62901d07f6c1bfa5cf82ddae4f61,PodSandboxId:0f530473c6518daba2504d48da181c58689c44ffd19685987529bd79bbfdd8bf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722900122697717291,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: de889c914a63f88b5552d92d7c04005b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95d6da5b264d99c2ae66291b9df0943d6f8ac4b1743a5bef2caebaaa9fa1694c,PodSandboxId:f6231b23266daa7beda5c2eb7b84162e5fe7c14db8b3c9ddcd78304bf2ec722c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722900122673726758,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5280d6dbae40883a3
4349dd31a13a779,},Annotations:map[string]string{io.kubernetes.container.hash: bd2d1b8f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5537b3a8dbcb27d26dc336a48652fdd3385ec0fb3b5169e72e472a665bc2e3ed,PodSandboxId:0b1220acf56ca1985bed119e03dfdc76cb09d54439c45a2488b7b06933c1f3be,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722900122644546831,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87091e6c521c934e57911d0cd84fc454,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d46bdf5c93d9a335000c2d92e3814610ae1e74850c28c7ec832821e7ed10c1b6,PodSandboxId:212b1287cb785d37bec039a02eceff99c8d4258dd1905092b47149fba9f31b8e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722900103405605088,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g9bml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd474413-e416-48db-a7bf-f3c40675819b,},Annotations:map[string]string{io.ku
bernetes.container.hash: 1bd67db4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f7140ac408890dd788c7a9d6a9857531edad86ff751157ac035e6ab0d4afdc,PodSandboxId:1bf94d816bd6b0f9325f20c0b2453330291a5dfa79448419ddd925a97f951bb9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722899618925272516,Labels:map[string]str
ing{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wmfql,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfc8bad7-d43d-4beb-991e-339a4ce96ab5,},Annotations:map[string]string{io.kubernetes.container.hash: fc00d50e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e65205c398221a15eecea1ec1092d54f364a44886b05149400c7be5ffafc3285,PodSandboxId:0df1c00cbbb9d6891997d631537dd7662e552d8dca3cea20f0b653ed34f6f7bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722899473822035995,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vzhst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9c09745-be29-4403-9e7d-f9e4eaae5cac,},Annotations:map[string]string{io.kubernetes.container.hash: 1a8c310a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97fa319bea82614cab7525f9052bcc8a09fad765b260045dbf0d0fa0ca0290b2,PodSandboxId:4f369251bc6de76b6eba2d8a6404cb53a6bcba17f58bd09854de9edd65d080fa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1722899461696983959,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xqx4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8455705e-b140-4f1e-abff-6a71bbb5415f,},Annotations:map[string]string{io.kubernetes.container.hash: 4a9283b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04c382fd4a32fe8685a6f643ecf7a291e4d542c2223975f9df92991fe566b12a,PodSandboxId:b7b77d3f5c8a24f9906eb41c479b7254cd21f7c4d0c34b7014bdfa5f666df829,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722899457757352731,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vj5sd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c9cdcb-e1b7-44c8-a6e3-5e5aeb76ba03,},Annotations:map[string]string{io.kubernetes.container.hash: a40979c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0893967672c7dc591bbcf220e56601b8a46fc11f07e63adbadaddec59ec1803,PodSandboxId:c7f5da3aca5fb3bac198b9144677aac33c3f5317946dad29f46e726a35d2c596,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722899438287916526,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47fd3d59fe4024c671f4b57dbae12a83,},Annotations:map[string]string{io.kubernetes.container.hash: fa9a7bc3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a85f2254a23cdec7e89ff8de2e31b06ddf2853808330965760217f1fd834004,PodSandboxId:57dd6eb50740256e4db3c59d0c1d850b0ba784d01abbeb7f8ea139160576fc43,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722899438266931166,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87091e6c521c934e57911d0cd84fc454,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=224641a1-f361-45d5-83b3-b391e28e059a name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:27:56 ha-044175 crio[3920]: time="2024-08-05 23:27:56.216671841Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7432db99-a648-4c09-b947-73c98024aa70 name=/runtime.v1.RuntimeService/Version
	Aug 05 23:27:56 ha-044175 crio[3920]: time="2024-08-05 23:27:56.216765097Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7432db99-a648-4c09-b947-73c98024aa70 name=/runtime.v1.RuntimeService/Version
	Aug 05 23:27:56 ha-044175 crio[3920]: time="2024-08-05 23:27:56.217987305Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=94d05013-190c-441b-91c4-7e85d9868914 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:27:56 ha-044175 crio[3920]: time="2024-08-05 23:27:56.218660998Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722900476218614518,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154770,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=94d05013-190c-441b-91c4-7e85d9868914 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:27:56 ha-044175 crio[3920]: time="2024-08-05 23:27:56.219567647Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a9af1568-23a4-4051-8ece-4ff56f4847f8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:27:56 ha-044175 crio[3920]: time="2024-08-05 23:27:56.219715359Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a9af1568-23a4-4051-8ece-4ff56f4847f8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:27:56 ha-044175 crio[3920]: time="2024-08-05 23:27:56.220166670Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fd7e94739a082d7384a7b998066384667954ebe9cc11847395a104db1a104317,PodSandboxId:77ac7fe6a83e0516a216fd1d55d638ed87cfcdf5723e5e28856ee5df04b14760,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722900187738951526,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d30d1a5b-cfbe-4de6-a964-75c32e5dbf62,},Annotations:map[string]string{io.kubernetes.container.hash: 4378961a,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6528b925e75da18994cd673a201712eb241eeff865202c130034f40f0a350bb8,PodSandboxId:0f530473c6518daba2504d48da181c58689c44ffd19685987529bd79bbfdd8bf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722900169724492024,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de889c914a63f88b5552d92d7c04005b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cf9b7cb63859c9cfe968fc20b9dacecfc681905714bc14a19a78ba20314f787,PodSandboxId:f6231b23266daa7beda5c2eb7b84162e5fe7c14db8b3c9ddcd78304bf2ec722c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722900162729981582,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5280d6dbae40883a34349dd31a13a779,},Annotations:map[string]string{io.kubernetes.container.hash: bd2d1b8f,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77b449aa0776d43116dbd794f0249b6e5fc5d747d7f6a8bc9604aebafc20ba74,PodSandboxId:2ff7308f4be3e77295c107b65333964734b52e07163e7f28b5c122b5225d1d4d,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722900156038540995,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wmfql,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfc8bad7-d43d-4beb-991e-339a4ce96ab5,},Annotations:map[string]string{io.kubernetes.container.hash: fc00d50e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91055df10dc934cc3b2614f239bef7e465aa9809f34bba79c6de90604d74f7ca,PodSandboxId:68d4fc648e15948920c68a4aad97654ab8e34af2ae6e4e2ecdd3c173abf8148d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722900137200072773,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd673cb8fe1efcc8b643555b76eaad93,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a11b5de6fd020c3af228be69825e370ecef21ab78d774519dac722cf721bb6e6,PodSandboxId:3f0c789e63c6b8da2eaddf246bf22fac58253370f7977c637db0653e6efb8ad4,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722900124470501686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g9bml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd474413-e416-48db-a7bf-f3c40675819b,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd67db4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"cont
ainerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:224c4967d5e92ceb088df11f70040bbd62d3bf073b04182cb32278b2db2419b1,PodSandboxId:77ac7fe6a83e0516a216fd1d55d638ed87cfcdf5723e5e28856ee5df04b14760,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722900122857826642,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d30d1a5b-cfbe-4de6-a964-75c32e5dbf62,},Annotations:map[string]string{io.kubernete
s.container.hash: 4378961a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97768d7c5371dd0c06071b82c8baadd28ee604281812facf0dbd4a723ea92274,PodSandboxId:b949dd01383277f7e3efd577b7b6302bc9888e365a2106061d4a3a3119168a36,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1722900122962106602,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xqx4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8455705e-b140-4f1e-abff-6a71bbb5415f,},Annotations:map[string]string{io.kubernetes.container.hash: 4a9283b6,io.kube
rnetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8aecc482892c69f412b19a67ecbfb961e4799ff113afee62cf254d8accc9e43a,PodSandboxId:e82fdb05fd230a5ff78128ae533e9617633b9f37f9a0671378ee9706bc2188c5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722900122848074225,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vzhst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9c09745-be29-4403-9e7d-f9e4eaae5cac,},Annotations:map[string]string{io.kubernetes.container.hash: 1a8c310a,io.kubernetes.container.ports: [{\"nam
e\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da62836e55aaaf8eee39a34113a3d41ba6489986d26134bed80020f8c7164507,PodSandboxId:5d40c713023d2ce8f1fd3f024181a8566c041373be34fbbdd28a7966391af628,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722900122740850436,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-044175,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 47fd3d59fe4024c671f4b57dbae12a83,},Annotations:map[string]string{io.kubernetes.container.hash: fa9a7bc3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5f43d5e7445c285e5783c937039be219df8aaea8c9db899259f8d24c895a378c,PodSandboxId:1e5c99969ac60dcfb40f80c63b009c0e6efc07de9fccdd5c48b9097ed4f8bf63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722900122542678745,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vj5sd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c9
cdcb-e1b7-44c8-a6e3-5e5aeb76ba03,},Annotations:map[string]string{io.kubernetes.container.hash: a40979c9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd436770dad332628ad6a3b7fea663d52dda62901d07f6c1bfa5cf82ddae4f61,PodSandboxId:0f530473c6518daba2504d48da181c58689c44ffd19685987529bd79bbfdd8bf,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722900122697717291,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: de889c914a63f88b5552d92d7c04005b,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95d6da5b264d99c2ae66291b9df0943d6f8ac4b1743a5bef2caebaaa9fa1694c,PodSandboxId:f6231b23266daa7beda5c2eb7b84162e5fe7c14db8b3c9ddcd78304bf2ec722c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722900122673726758,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5280d6dbae40883a3
4349dd31a13a779,},Annotations:map[string]string{io.kubernetes.container.hash: bd2d1b8f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5537b3a8dbcb27d26dc336a48652fdd3385ec0fb3b5169e72e472a665bc2e3ed,PodSandboxId:0b1220acf56ca1985bed119e03dfdc76cb09d54439c45a2488b7b06933c1f3be,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722900122644546831,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87091e6c521c934e57911d0cd84fc454,},Ann
otations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d46bdf5c93d9a335000c2d92e3814610ae1e74850c28c7ec832821e7ed10c1b6,PodSandboxId:212b1287cb785d37bec039a02eceff99c8d4258dd1905092b47149fba9f31b8e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722900103405605088,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-g9bml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fd474413-e416-48db-a7bf-f3c40675819b,},Annotations:map[string]string{io.ku
bernetes.container.hash: 1bd67db4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f7140ac408890dd788c7a9d6a9857531edad86ff751157ac035e6ab0d4afdc,PodSandboxId:1bf94d816bd6b0f9325f20c0b2453330291a5dfa79448419ddd925a97f951bb9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722899618925272516,Labels:map[string]str
ing{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-wmfql,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bfc8bad7-d43d-4beb-991e-339a4ce96ab5,},Annotations:map[string]string{io.kubernetes.container.hash: fc00d50e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e65205c398221a15eecea1ec1092d54f364a44886b05149400c7be5ffafc3285,PodSandboxId:0df1c00cbbb9d6891997d631537dd7662e552d8dca3cea20f0b653ed34f6f7bb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722899473822035995,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-vzhst,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9c09745-be29-4403-9e7d-f9e4eaae5cac,},Annotations:map[string]string{io.kubernetes.container.hash: 1a8c310a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97fa319bea82614cab7525f9052bcc8a09fad765b260045dbf0d0fa0ca0290b2,PodSandboxId:4f369251bc6de76b6eba2d8a6404cb53a6bcba17f58bd09854de9edd65d080fa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1722899461696983959,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xqx4z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8455705e-b140-4f1e-abff-6a71bbb5415f,},Annotations:map[string]string{io.kubernetes.container.hash: 4a9283b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04c382fd4a32fe8685a6f643ecf7a291e4d542c2223975f9df92991fe566b12a,PodSandboxId:b7b77d3f5c8a24f9906eb41c479b7254cd21f7c4d0c34b7014bdfa5f666df829,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722899457757352731,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vj5sd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6c9cdcb-e1b7-44c8-a6e3-5e5aeb76ba03,},Annotations:map[string]string{io.kubernetes.container.hash: a40979c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0893967672c7dc591bbcf220e56601b8a46fc11f07e63adbadaddec59ec1803,PodSandboxId:c7f5da3aca5fb3bac198b9144677aac33c3f5317946dad29f46e726a35d2c596,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722899438287916526,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47fd3d59fe4024c671f4b57dbae12a83,},Annotations:map[string]string{io.kubernetes.container.hash: fa9a7bc3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a85f2254a23cdec7e89ff8de2e31b06ddf2853808330965760217f1fd834004,PodSandboxId:57dd6eb50740256e4db3c59d0c1d850b0ba784d01abbeb7f8ea139160576fc43,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722899438266931166,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-044175,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 87091e6c521c934e57911d0cd84fc454,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a9af1568-23a4-4051-8ece-4ff56f4847f8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fd7e94739a082       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       4                   77ac7fe6a83e0       storage-provisioner
	6528b925e75da       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      5 minutes ago       Running             kube-controller-manager   2                   0f530473c6518       kube-controller-manager-ha-044175
	7cf9b7cb63859       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      5 minutes ago       Running             kube-apiserver            3                   f6231b23266da       kube-apiserver-ha-044175
	77b449aa0776d       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      5 minutes ago       Running             busybox                   1                   2ff7308f4be3e       busybox-fc5497c4f-wmfql
	91055df10dc93       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      5 minutes ago       Running             kube-vip                  0                   68d4fc648e159       kube-vip-ha-044175
	a11b5de6fd020       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   2                   3f0c789e63c6b       coredns-7db6d8ff4d-g9bml
	97768d7c5371d       917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557                                      5 minutes ago       Running             kindnet-cni               1                   b949dd0138327       kindnet-xqx4z
	224c4967d5e92       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       3                   77ac7fe6a83e0       storage-provisioner
	8aecc482892c6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   e82fdb05fd230       coredns-7db6d8ff4d-vzhst
	da62836e55aaa       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      5 minutes ago       Running             etcd                      1                   5d40c713023d2       etcd-ha-044175
	dd436770dad33       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      5 minutes ago       Exited              kube-controller-manager   1                   0f530473c6518       kube-controller-manager-ha-044175
	95d6da5b264d9       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      5 minutes ago       Exited              kube-apiserver            2                   f6231b23266da       kube-apiserver-ha-044175
	5537b3a8dbcb2       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      5 minutes ago       Running             kube-scheduler            1                   0b1220acf56ca       kube-scheduler-ha-044175
	5f43d5e7445c2       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      5 minutes ago       Running             kube-proxy                1                   1e5c99969ac60       kube-proxy-vj5sd
	d46bdf5c93d9a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Exited              coredns                   1                   212b1287cb785       coredns-7db6d8ff4d-g9bml
	14f7140ac4088       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   14 minutes ago      Exited              busybox                   0                   1bf94d816bd6b       busybox-fc5497c4f-wmfql
	e65205c398221       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   0df1c00cbbb9d       coredns-7db6d8ff4d-vzhst
	97fa319bea826       docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3    16 minutes ago      Exited              kindnet-cni               0                   4f369251bc6de       kindnet-xqx4z
	04c382fd4a32f       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      16 minutes ago      Exited              kube-proxy                0                   b7b77d3f5c8a2       kube-proxy-vj5sd
	b0893967672c7       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      17 minutes ago      Exited              etcd                      0                   c7f5da3aca5fb       etcd-ha-044175
	2a85f2254a23c       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      17 minutes ago      Exited              kube-scheduler            0                   57dd6eb507402       kube-scheduler-ha-044175
	
	
	==> coredns [8aecc482892c69f412b19a67ecbfb961e4799ff113afee62cf254d8accc9e43a] <==
	Trace[448270504]: [10.796676678s] [10.796676678s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:44896->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:46242->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1977882846]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (05-Aug-2024 23:22:17.552) (total time: 10490ms):
	Trace[1977882846]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:46242->10.96.0.1:443: read: connection reset by peer 10490ms (23:22:28.043)
	Trace[1977882846]: [10.49098567s] [10.49098567s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:46242->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [a11b5de6fd020c3af228be69825e370ecef21ab78d774519dac722cf721bb6e6] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:47698->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:47698->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [d46bdf5c93d9a335000c2d92e3814610ae1e74850c28c7ec832821e7ed10c1b6] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:59273 - 2512 "HINFO IN 3207962830486949060.9184539038836446459. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014088898s
	
	
	==> coredns [e65205c398221a15eecea1ec1092d54f364a44886b05149400c7be5ffafc3285] <==
	[INFO] 10.244.2.2:56153 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00013893s
	[INFO] 10.244.1.2:33342 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001850863s
	[INFO] 10.244.1.2:42287 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000148733s
	[INFO] 10.244.1.2:54735 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100517s
	[INFO] 10.244.1.2:59789 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001317452s
	[INFO] 10.244.0.4:40404 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000074048s
	[INFO] 10.244.0.4:48828 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002066965s
	[INFO] 10.244.0.4:45447 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000152682s
	[INFO] 10.244.2.2:44344 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146254s
	[INFO] 10.244.2.2:44960 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000197937s
	[INFO] 10.244.1.2:46098 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107825s
	[INFO] 10.244.0.4:53114 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104641s
	[INFO] 10.244.0.4:55920 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073557s
	[INFO] 10.244.2.2:36832 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001192s
	[INFO] 10.244.2.2:36836 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00014154s
	[INFO] 10.244.1.2:35009 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00021099s
	[INFO] 10.244.1.2:49630 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00009192s
	[INFO] 10.244.1.2:49164 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000128354s
	[INFO] 10.244.0.4:33938 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000080255s
	[INFO] 10.244.0.4:34551 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000092007s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=21, ErrCode=NO_ERROR, debug=""
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-044175
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-044175
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=ha-044175
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_05T23_10_45_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 23:10:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-044175
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 23:27:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 23:27:50 +0000   Mon, 05 Aug 2024 23:10:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 23:27:50 +0000   Mon, 05 Aug 2024 23:10:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 23:27:50 +0000   Mon, 05 Aug 2024 23:10:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 23:27:50 +0000   Mon, 05 Aug 2024 23:11:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.57
	  Hostname:    ha-044175
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a7535c9f09f54963b658b49234079761
	  System UUID:                a7535c9f-09f5-4963-b658-b49234079761
	  Boot ID:                    97ae6699-97e9-4260-9f54-aa4546b6e1f0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-wmfql              0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-7db6d8ff4d-g9bml             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-vzhst             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-044175                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kindnet-xqx4z                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-044175             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-044175    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-vj5sd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-044175             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-044175                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   Starting                 5m11s                  kube-proxy       
	  Normal   NodeHasNoDiskPressure    17m                    kubelet          Node ha-044175 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 17m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  17m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  17m                    kubelet          Node ha-044175 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     17m                    kubelet          Node ha-044175 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           17m                    node-controller  Node ha-044175 event: Registered Node ha-044175 in Controller
	  Normal   NodeReady                16m                    kubelet          Node ha-044175 status is now: NodeReady
	  Normal   RegisteredNode           15m                    node-controller  Node ha-044175 event: Registered Node ha-044175 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-044175 event: Registered Node ha-044175 in Controller
	  Warning  ContainerGCFailed        6m12s (x2 over 7m12s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           5m7s                   node-controller  Node ha-044175 event: Registered Node ha-044175 in Controller
	  Normal   RegisteredNode           4m55s                  node-controller  Node ha-044175 event: Registered Node ha-044175 in Controller
	  Normal   RegisteredNode           3m11s                  node-controller  Node ha-044175 event: Registered Node ha-044175 in Controller
	
	
	Name:               ha-044175-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-044175-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=ha-044175
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_05T23_11_52_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 23:11:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-044175-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 23:27:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 23:23:27 +0000   Mon, 05 Aug 2024 23:22:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 23:23:27 +0000   Mon, 05 Aug 2024 23:22:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 23:23:27 +0000   Mon, 05 Aug 2024 23:22:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 23:23:27 +0000   Mon, 05 Aug 2024 23:22:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.112
	  Hostname:    ha-044175-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3b8a8f60868345a4bc1ba1393dbdecaf
	  System UUID:                3b8a8f60-8683-45a4-bc1b-a1393dbdecaf
	  Boot ID:                    71e8903c-f0e2-496b-815e-23868eec6c11
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-tpqpw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-ha-044175-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-hqhgc                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-044175-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-044175-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-jfs9q                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-044175-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-044175-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m8s                   kube-proxy       
	  Normal  Starting                 16m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)      kubelet          Node ha-044175-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)      kubelet          Node ha-044175-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)      kubelet          Node ha-044175-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m                    node-controller  Node ha-044175-m02 event: Registered Node ha-044175-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-044175-m02 event: Registered Node ha-044175-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-044175-m02 event: Registered Node ha-044175-m02 in Controller
	  Normal  NodeNotReady             12m                    node-controller  Node ha-044175-m02 status is now: NodeNotReady
	  Normal  Starting                 5m38s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m38s (x8 over 5m38s)  kubelet          Node ha-044175-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m38s (x8 over 5m38s)  kubelet          Node ha-044175-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m38s (x7 over 5m38s)  kubelet          Node ha-044175-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m7s                   node-controller  Node ha-044175-m02 event: Registered Node ha-044175-m02 in Controller
	  Normal  RegisteredNode           4m55s                  node-controller  Node ha-044175-m02 event: Registered Node ha-044175-m02 in Controller
	  Normal  RegisteredNode           3m11s                  node-controller  Node ha-044175-m02 event: Registered Node ha-044175-m02 in Controller
	
	
	Name:               ha-044175-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-044175-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=ha-044175
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_05T23_14_14_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 23:14:13 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-044175-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 23:25:29 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 05 Aug 2024 23:25:09 +0000   Mon, 05 Aug 2024 23:26:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 05 Aug 2024 23:25:09 +0000   Mon, 05 Aug 2024 23:26:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 05 Aug 2024 23:25:09 +0000   Mon, 05 Aug 2024 23:26:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 05 Aug 2024 23:25:09 +0000   Mon, 05 Aug 2024 23:26:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.228
	  Hostname:    ha-044175-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0d2536a5615e49c8bf2cb4a8d6f85b2f
	  System UUID:                0d2536a5-615e-49c8-bf2c-b4a8d6f85b2f
	  Boot ID:                    d840370c-d54f-402e-9d08-4c3e0708b35d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-sf69j    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kindnet-2rpdm              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-r5567           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m44s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m (x2 over 13m)      kubelet          Node ha-044175-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x2 over 13m)      kubelet          Node ha-044175-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x2 over 13m)      kubelet          Node ha-044175-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                    node-controller  Node ha-044175-m04 event: Registered Node ha-044175-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-044175-m04 event: Registered Node ha-044175-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-044175-m04 event: Registered Node ha-044175-m04 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-044175-m04 status is now: NodeReady
	  Normal   RegisteredNode           5m7s                   node-controller  Node ha-044175-m04 event: Registered Node ha-044175-m04 in Controller
	  Normal   RegisteredNode           4m55s                  node-controller  Node ha-044175-m04 event: Registered Node ha-044175-m04 in Controller
	  Normal   RegisteredNode           3m11s                  node-controller  Node ha-044175-m04 event: Registered Node ha-044175-m04 in Controller
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m47s (x3 over 2m47s)  kubelet          Node ha-044175-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m47s (x3 over 2m47s)  kubelet          Node ha-044175-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m47s (x3 over 2m47s)  kubelet          Node ha-044175-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m47s (x2 over 2m47s)  kubelet          Node ha-044175-m04 has been rebooted, boot id: d840370c-d54f-402e-9d08-4c3e0708b35d
	  Normal   NodeReady                2m47s (x2 over 2m47s)  kubelet          Node ha-044175-m04 status is now: NodeReady
	  Normal   NodeNotReady             106s (x2 over 4m27s)   node-controller  Node ha-044175-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.066481] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.165121] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.129651] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.275605] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +4.344469] systemd-fstab-generator[777]: Ignoring "noauto" option for root device
	[  +0.058179] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.730128] systemd-fstab-generator[959]: Ignoring "noauto" option for root device
	[  +0.903161] kauditd_printk_skb: 46 callbacks suppressed
	[  +6.792303] systemd-fstab-generator[1383]: Ignoring "noauto" option for root device
	[  +0.087803] kauditd_printk_skb: 51 callbacks suppressed
	[ +13.188886] kauditd_printk_skb: 21 callbacks suppressed
	[Aug 5 23:11] kauditd_printk_skb: 35 callbacks suppressed
	[ +53.752834] kauditd_printk_skb: 24 callbacks suppressed
	[Aug 5 23:18] kauditd_printk_skb: 1 callbacks suppressed
	[Aug 5 23:21] systemd-fstab-generator[3834]: Ignoring "noauto" option for root device
	[  +0.151240] systemd-fstab-generator[3846]: Ignoring "noauto" option for root device
	[  +0.184946] systemd-fstab-generator[3860]: Ignoring "noauto" option for root device
	[  +0.146188] systemd-fstab-generator[3872]: Ignoring "noauto" option for root device
	[  +0.302569] systemd-fstab-generator[3900]: Ignoring "noauto" option for root device
	[  +9.901517] systemd-fstab-generator[4031]: Ignoring "noauto" option for root device
	[  +0.087787] kauditd_printk_skb: 110 callbacks suppressed
	[Aug 5 23:22] kauditd_printk_skb: 12 callbacks suppressed
	[ +12.414953] kauditd_printk_skb: 86 callbacks suppressed
	[ +10.059420] kauditd_printk_skb: 1 callbacks suppressed
	[ +25.020068] kauditd_printk_skb: 3 callbacks suppressed
	
	
	==> etcd [b0893967672c7dc591bbcf220e56601b8a46fc11f07e63adbadaddec59ec1803] <==
	2024/08/05 23:20:12 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/08/05 23:20:12 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/08/05 23:20:12 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/08/05 23:20:12 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-05T23:20:12.320974Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":17815555288227144781,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-08-05T23:20:12.360586Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.57:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-05T23:20:12.360628Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.57:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-05T23:20:12.360684Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"79ee2fa200dbf73d","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-05T23:20:12.360878Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"74b01d9147cbb35"}
	{"level":"info","ts":"2024-08-05T23:20:12.360916Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"74b01d9147cbb35"}
	{"level":"info","ts":"2024-08-05T23:20:12.360962Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"74b01d9147cbb35"}
	{"level":"info","ts":"2024-08-05T23:20:12.361042Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35"}
	{"level":"info","ts":"2024-08-05T23:20:12.361093Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35"}
	{"level":"info","ts":"2024-08-05T23:20:12.361128Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"79ee2fa200dbf73d","remote-peer-id":"74b01d9147cbb35"}
	{"level":"info","ts":"2024-08-05T23:20:12.361138Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"74b01d9147cbb35"}
	{"level":"info","ts":"2024-08-05T23:20:12.361143Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"64e36570e84f18f4"}
	{"level":"info","ts":"2024-08-05T23:20:12.361151Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"64e36570e84f18f4"}
	{"level":"info","ts":"2024-08-05T23:20:12.36117Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"64e36570e84f18f4"}
	{"level":"info","ts":"2024-08-05T23:20:12.36126Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"79ee2fa200dbf73d","remote-peer-id":"64e36570e84f18f4"}
	{"level":"info","ts":"2024-08-05T23:20:12.361303Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"79ee2fa200dbf73d","remote-peer-id":"64e36570e84f18f4"}
	{"level":"info","ts":"2024-08-05T23:20:12.361545Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"79ee2fa200dbf73d","remote-peer-id":"64e36570e84f18f4"}
	{"level":"info","ts":"2024-08-05T23:20:12.361588Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"64e36570e84f18f4"}
	{"level":"info","ts":"2024-08-05T23:20:12.365179Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.57:2380"}
	{"level":"info","ts":"2024-08-05T23:20:12.365321Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.57:2380"}
	{"level":"info","ts":"2024-08-05T23:20:12.365347Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-044175","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.57:2380"],"advertise-client-urls":["https://192.168.39.57:2379"]}
	
	
	==> etcd [da62836e55aaaf8eee39a34113a3d41ba6489986d26134bed80020f8c7164507] <==
	{"level":"warn","ts":"2024-08-05T23:24:25.506695Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"64e36570e84f18f4","error":"Get \"https://192.168.39.201:2380/version\": dial tcp 192.168.39.201:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-05T23:24:26.280998Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"64e36570e84f18f4"}
	{"level":"info","ts":"2024-08-05T23:24:26.281184Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"79ee2fa200dbf73d","remote-peer-id":"64e36570e84f18f4"}
	{"level":"info","ts":"2024-08-05T23:24:26.281361Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"79ee2fa200dbf73d","remote-peer-id":"64e36570e84f18f4"}
	{"level":"info","ts":"2024-08-05T23:24:26.30014Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"79ee2fa200dbf73d","to":"64e36570e84f18f4","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-05T23:24:26.300249Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"79ee2fa200dbf73d","remote-peer-id":"64e36570e84f18f4"}
	{"level":"info","ts":"2024-08-05T23:24:26.373123Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"79ee2fa200dbf73d","to":"64e36570e84f18f4","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-05T23:24:26.373191Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"79ee2fa200dbf73d","remote-peer-id":"64e36570e84f18f4"}
	{"level":"info","ts":"2024-08-05T23:25:22.48426Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"79ee2fa200dbf73d switched to configuration voters=(525515813382044469 8786012295892039485)"}
	{"level":"info","ts":"2024-08-05T23:25:22.488592Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"cdb6bc6ece496785","local-member-id":"79ee2fa200dbf73d","removed-remote-peer-id":"64e36570e84f18f4","removed-remote-peer-urls":["https://192.168.39.201:2380"]}
	{"level":"info","ts":"2024-08-05T23:25:22.488697Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"64e36570e84f18f4"}
	{"level":"warn","ts":"2024-08-05T23:25:22.488977Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"64e36570e84f18f4"}
	{"level":"info","ts":"2024-08-05T23:25:22.489069Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"64e36570e84f18f4"}
	{"level":"warn","ts":"2024-08-05T23:25:22.489293Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"64e36570e84f18f4"}
	{"level":"info","ts":"2024-08-05T23:25:22.489333Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"64e36570e84f18f4"}
	{"level":"info","ts":"2024-08-05T23:25:22.489495Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"79ee2fa200dbf73d","remote-peer-id":"64e36570e84f18f4"}
	{"level":"warn","ts":"2024-08-05T23:25:22.489878Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"79ee2fa200dbf73d","remote-peer-id":"64e36570e84f18f4","error":"context canceled"}
	{"level":"warn","ts":"2024-08-05T23:25:22.489989Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"64e36570e84f18f4","error":"failed to read 64e36570e84f18f4 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-08-05T23:25:22.490136Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"79ee2fa200dbf73d","remote-peer-id":"64e36570e84f18f4"}
	{"level":"warn","ts":"2024-08-05T23:25:22.491709Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"79ee2fa200dbf73d","remote-peer-id":"64e36570e84f18f4","error":"context canceled"}
	{"level":"info","ts":"2024-08-05T23:25:22.492209Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"79ee2fa200dbf73d","remote-peer-id":"64e36570e84f18f4"}
	{"level":"info","ts":"2024-08-05T23:25:22.494547Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"64e36570e84f18f4"}
	{"level":"info","ts":"2024-08-05T23:25:22.494592Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"79ee2fa200dbf73d","removed-remote-peer-id":"64e36570e84f18f4"}
	{"level":"warn","ts":"2024-08-05T23:25:22.515811Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"79ee2fa200dbf73d","remote-peer-id-stream-handler":"79ee2fa200dbf73d","remote-peer-id-from":"64e36570e84f18f4"}
	{"level":"warn","ts":"2024-08-05T23:25:22.520668Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"79ee2fa200dbf73d","remote-peer-id-stream-handler":"79ee2fa200dbf73d","remote-peer-id-from":"64e36570e84f18f4"}
	
	
	==> kernel <==
	 23:27:56 up 17 min,  0 users,  load average: 0.63, 0.60, 0.39
	Linux ha-044175 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [97768d7c5371dd0c06071b82c8baadd28ee604281812facf0dbd4a723ea92274] <==
	I0805 23:27:14.214096       1 main.go:322] Node ha-044175-m04 has CIDR [10.244.3.0/24] 
	I0805 23:27:24.218788       1 main.go:295] Handling node with IPs: map[192.168.39.112:{}]
	I0805 23:27:24.218942       1 main.go:322] Node ha-044175-m02 has CIDR [10.244.1.0/24] 
	I0805 23:27:24.219147       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0805 23:27:24.219174       1 main.go:322] Node ha-044175-m04 has CIDR [10.244.3.0/24] 
	I0805 23:27:24.219298       1 main.go:295] Handling node with IPs: map[192.168.39.57:{}]
	I0805 23:27:24.219326       1 main.go:299] handling current node
	I0805 23:27:34.211072       1 main.go:295] Handling node with IPs: map[192.168.39.57:{}]
	I0805 23:27:34.211275       1 main.go:299] handling current node
	I0805 23:27:34.211304       1 main.go:295] Handling node with IPs: map[192.168.39.112:{}]
	I0805 23:27:34.211323       1 main.go:322] Node ha-044175-m02 has CIDR [10.244.1.0/24] 
	I0805 23:27:34.211547       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0805 23:27:34.211578       1 main.go:322] Node ha-044175-m04 has CIDR [10.244.3.0/24] 
	I0805 23:27:44.212218       1 main.go:295] Handling node with IPs: map[192.168.39.57:{}]
	I0805 23:27:44.212267       1 main.go:299] handling current node
	I0805 23:27:44.212281       1 main.go:295] Handling node with IPs: map[192.168.39.112:{}]
	I0805 23:27:44.212287       1 main.go:322] Node ha-044175-m02 has CIDR [10.244.1.0/24] 
	I0805 23:27:44.212465       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0805 23:27:44.212492       1 main.go:322] Node ha-044175-m04 has CIDR [10.244.3.0/24] 
	I0805 23:27:54.212483       1 main.go:295] Handling node with IPs: map[192.168.39.112:{}]
	I0805 23:27:54.212531       1 main.go:322] Node ha-044175-m02 has CIDR [10.244.1.0/24] 
	I0805 23:27:54.212680       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0805 23:27:54.212688       1 main.go:322] Node ha-044175-m04 has CIDR [10.244.3.0/24] 
	I0805 23:27:54.212750       1 main.go:295] Handling node with IPs: map[192.168.39.57:{}]
	I0805 23:27:54.212775       1 main.go:299] handling current node
	
	
	==> kindnet [97fa319bea82614cab7525f9052bcc8a09fad765b260045dbf0d0fa0ca0290b2] <==
	I0805 23:19:32.766825       1 main.go:322] Node ha-044175-m02 has CIDR [10.244.1.0/24] 
	I0805 23:19:42.758028       1 main.go:295] Handling node with IPs: map[192.168.39.57:{}]
	I0805 23:19:42.758135       1 main.go:299] handling current node
	I0805 23:19:42.758162       1 main.go:295] Handling node with IPs: map[192.168.39.112:{}]
	I0805 23:19:42.758181       1 main.go:322] Node ha-044175-m02 has CIDR [10.244.1.0/24] 
	I0805 23:19:42.758324       1 main.go:295] Handling node with IPs: map[192.168.39.201:{}]
	I0805 23:19:42.758344       1 main.go:322] Node ha-044175-m03 has CIDR [10.244.2.0/24] 
	I0805 23:19:42.758482       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0805 23:19:42.758508       1 main.go:322] Node ha-044175-m04 has CIDR [10.244.3.0/24] 
	I0805 23:19:52.757433       1 main.go:295] Handling node with IPs: map[192.168.39.57:{}]
	I0805 23:19:52.757482       1 main.go:299] handling current node
	I0805 23:19:52.757504       1 main.go:295] Handling node with IPs: map[192.168.39.112:{}]
	I0805 23:19:52.757509       1 main.go:322] Node ha-044175-m02 has CIDR [10.244.1.0/24] 
	I0805 23:19:52.757630       1 main.go:295] Handling node with IPs: map[192.168.39.201:{}]
	I0805 23:19:52.757675       1 main.go:322] Node ha-044175-m03 has CIDR [10.244.2.0/24] 
	I0805 23:19:52.757731       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0805 23:19:52.757753       1 main.go:322] Node ha-044175-m04 has CIDR [10.244.3.0/24] 
	I0805 23:20:02.757577       1 main.go:295] Handling node with IPs: map[192.168.39.228:{}]
	I0805 23:20:02.757603       1 main.go:322] Node ha-044175-m04 has CIDR [10.244.3.0/24] 
	I0805 23:20:02.757761       1 main.go:295] Handling node with IPs: map[192.168.39.57:{}]
	I0805 23:20:02.757788       1 main.go:299] handling current node
	I0805 23:20:02.757800       1 main.go:295] Handling node with IPs: map[192.168.39.112:{}]
	I0805 23:20:02.757805       1 main.go:322] Node ha-044175-m02 has CIDR [10.244.1.0/24] 
	I0805 23:20:02.757858       1 main.go:295] Handling node with IPs: map[192.168.39.201:{}]
	I0805 23:20:02.757879       1 main.go:322] Node ha-044175-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [7cf9b7cb63859c9cfe968fc20b9dacecfc681905714bc14a19a78ba20314f787] <==
	I0805 23:22:44.577987       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0805 23:22:44.663216       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0805 23:22:44.667234       1 aggregator.go:165] initial CRD sync complete...
	I0805 23:22:44.667341       1 autoregister_controller.go:141] Starting autoregister controller
	I0805 23:22:44.667411       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0805 23:22:44.693723       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0805 23:22:44.701661       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0805 23:22:44.705836       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0805 23:22:44.720285       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0805 23:22:44.730831       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0805 23:22:44.730866       1 policy_source.go:224] refreshing policies
	W0805 23:22:44.742162       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.112 192.168.39.201]
	I0805 23:22:44.744783       1 controller.go:615] quota admission added evaluator for: endpoints
	I0805 23:22:44.761335       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0805 23:22:44.761961       1 shared_informer.go:320] Caches are synced for configmaps
	I0805 23:22:44.762257       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0805 23:22:44.763051       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0805 23:22:44.769913       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0805 23:22:44.771243       1 cache.go:39] Caches are synced for autoregister controller
	E0805 23:22:44.780052       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0805 23:22:44.801491       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0805 23:22:45.576245       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0805 23:22:46.037358       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.112 192.168.39.201 192.168.39.57]
	W0805 23:22:56.035495       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.112 192.168.39.57]
	W0805 23:25:36.044010       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.112 192.168.39.57]
	
	
	==> kube-apiserver [95d6da5b264d99c2ae66291b9df0943d6f8ac4b1743a5bef2caebaaa9fa1694c] <==
	I0805 23:22:03.308960       1 options.go:221] external host was not specified, using 192.168.39.57
	I0805 23:22:03.312215       1 server.go:148] Version: v1.30.3
	I0805 23:22:03.312262       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 23:22:04.226588       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0805 23:22:04.237351       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0805 23:22:04.237527       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0805 23:22:04.237764       1 instance.go:299] Using reconciler: lease
	I0805 23:22:04.239078       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0805 23:22:24.224714       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0805 23:22:24.225529       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0805 23:22:24.239123       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [6528b925e75da18994cd673a201712eb241eeff865202c130034f40f0a350bb8] <==
	I0805 23:25:21.324677       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="196.359µs"
	I0805 23:25:21.796700       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.717µs"
	I0805 23:25:21.816741       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.134µs"
	I0805 23:25:21.823893       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="73.376µs"
	I0805 23:25:23.482877       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.802568ms"
	I0805 23:25:23.483027       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.826µs"
	I0805 23:25:34.109582       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-044175-m04"
	E0805 23:25:34.171314       1 garbagecollector.go:399] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"coordination.k8s.io/v1", Kind:"Lease", Name:"ha-044175-m03", UID:"2da57d95-0f35-4fe3-97ee-17b52c4cd409", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:"kube-node-lease"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Node", Name:"ha-044175-m03", UID:"1bc4c6c8-4adc-481f-977a-494e4c0a280d", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: leases.coordination.k8s.io "ha-044175-m03" not found
	E0805 23:25:41.887784       1 gc_controller.go:153] "Failed to get node" err="node \"ha-044175-m03\" not found" logger="pod-garbage-collector-controller" node="ha-044175-m03"
	E0805 23:25:41.887836       1 gc_controller.go:153] "Failed to get node" err="node \"ha-044175-m03\" not found" logger="pod-garbage-collector-controller" node="ha-044175-m03"
	E0805 23:25:41.887845       1 gc_controller.go:153] "Failed to get node" err="node \"ha-044175-m03\" not found" logger="pod-garbage-collector-controller" node="ha-044175-m03"
	E0805 23:25:41.887852       1 gc_controller.go:153] "Failed to get node" err="node \"ha-044175-m03\" not found" logger="pod-garbage-collector-controller" node="ha-044175-m03"
	E0805 23:25:41.887858       1 gc_controller.go:153] "Failed to get node" err="node \"ha-044175-m03\" not found" logger="pod-garbage-collector-controller" node="ha-044175-m03"
	E0805 23:26:01.888025       1 gc_controller.go:153] "Failed to get node" err="node \"ha-044175-m03\" not found" logger="pod-garbage-collector-controller" node="ha-044175-m03"
	E0805 23:26:01.888144       1 gc_controller.go:153] "Failed to get node" err="node \"ha-044175-m03\" not found" logger="pod-garbage-collector-controller" node="ha-044175-m03"
	E0805 23:26:01.888199       1 gc_controller.go:153] "Failed to get node" err="node \"ha-044175-m03\" not found" logger="pod-garbage-collector-controller" node="ha-044175-m03"
	E0805 23:26:01.888230       1 gc_controller.go:153] "Failed to get node" err="node \"ha-044175-m03\" not found" logger="pod-garbage-collector-controller" node="ha-044175-m03"
	E0805 23:26:01.888260       1 gc_controller.go:153] "Failed to get node" err="node \"ha-044175-m03\" not found" logger="pod-garbage-collector-controller" node="ha-044175-m03"
	I0805 23:26:10.094734       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.21967ms"
	I0805 23:26:10.094858       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.38µs"
	E0805 23:26:21.888795       1 gc_controller.go:153] "Failed to get node" err="node \"ha-044175-m03\" not found" logger="pod-garbage-collector-controller" node="ha-044175-m03"
	E0805 23:26:21.888843       1 gc_controller.go:153] "Failed to get node" err="node \"ha-044175-m03\" not found" logger="pod-garbage-collector-controller" node="ha-044175-m03"
	E0805 23:26:21.888850       1 gc_controller.go:153] "Failed to get node" err="node \"ha-044175-m03\" not found" logger="pod-garbage-collector-controller" node="ha-044175-m03"
	E0805 23:26:21.888855       1 gc_controller.go:153] "Failed to get node" err="node \"ha-044175-m03\" not found" logger="pod-garbage-collector-controller" node="ha-044175-m03"
	E0805 23:26:21.888859       1 gc_controller.go:153] "Failed to get node" err="node \"ha-044175-m03\" not found" logger="pod-garbage-collector-controller" node="ha-044175-m03"
	
	
	==> kube-controller-manager [dd436770dad332628ad6a3b7fea663d52dda62901d07f6c1bfa5cf82ddae4f61] <==
	I0805 23:22:04.085143       1 serving.go:380] Generated self-signed cert in-memory
	I0805 23:22:04.507970       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0805 23:22:04.508020       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 23:22:04.512018       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0805 23:22:04.512893       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0805 23:22:04.513091       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0805 23:22:04.513203       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0805 23:22:25.246316       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.57:8443/healthz\": dial tcp 192.168.39.57:8443: connect: connection refused"
	
	
	==> kube-proxy [04c382fd4a32fe8685a6f643ecf7a291e4d542c2223975f9df92991fe566b12a] <==
	E0805 23:18:53.771110       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 23:18:53.770796       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 23:18:53.771183       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 23:18:53.770856       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-044175&resourceVersion=1814": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 23:18:53.771257       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-044175&resourceVersion=1814": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 23:19:01.963118       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 23:19:01.963775       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 23:19:01.963863       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 23:19:01.963914       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 23:19:01.963157       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-044175&resourceVersion=1814": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 23:19:01.964096       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-044175&resourceVersion=1814": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 23:19:09.708798       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 23:19:09.709004       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 23:19:12.781970       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-044175&resourceVersion=1814": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 23:19:12.782201       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-044175&resourceVersion=1814": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 23:19:15.851252       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 23:19:15.851494       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 23:19:28.140691       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 23:19:28.141005       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 23:19:34.283692       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-044175&resourceVersion=1814": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 23:19:34.283815       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-044175&resourceVersion=1814": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 23:19:40.427939       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 23:19:40.428007       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1900": dial tcp 192.168.39.254:8443: connect: no route to host
	W0805 23:20:05.003291       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	E0805 23:20:05.003527       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1856": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-proxy [5f43d5e7445c285e5783c937039be219df8aaea8c9db899259f8d24c895a378c] <==
	I0805 23:22:04.516359       1 server_linux.go:69] "Using iptables proxy"
	E0805 23:22:04.812454       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-044175\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0805 23:22:07.883973       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-044175\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0805 23:22:10.955807       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-044175\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0805 23:22:17.100004       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-044175\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0805 23:22:26.316453       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-044175\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0805 23:22:44.743833       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.57"]
	I0805 23:22:44.885801       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0805 23:22:44.885876       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 23:22:44.885897       1 server_linux.go:165] "Using iptables Proxier"
	I0805 23:22:44.889896       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 23:22:44.890153       1 server.go:872] "Version info" version="v1.30.3"
	I0805 23:22:44.890757       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 23:22:44.892946       1 config.go:192] "Starting service config controller"
	I0805 23:22:44.892989       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 23:22:44.893021       1 config.go:101] "Starting endpoint slice config controller"
	I0805 23:22:44.893025       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 23:22:44.893846       1 config.go:319] "Starting node config controller"
	I0805 23:22:44.893878       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 23:22:44.994170       1 shared_informer.go:320] Caches are synced for node config
	I0805 23:22:44.994235       1 shared_informer.go:320] Caches are synced for service config
	I0805 23:22:44.994299       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [2a85f2254a23cdec7e89ff8de2e31b06ddf2853808330965760217f1fd834004] <==
	E0805 23:20:06.326083       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0805 23:20:06.367923       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0805 23:20:06.368059       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0805 23:20:06.481469       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0805 23:20:06.481665       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0805 23:20:06.782289       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0805 23:20:06.782468       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0805 23:20:06.813116       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0805 23:20:06.813259       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0805 23:20:06.961524       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0805 23:20:06.961669       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0805 23:20:07.046991       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0805 23:20:07.047051       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0805 23:20:07.785781       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0805 23:20:07.785833       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0805 23:20:07.786992       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0805 23:20:07.787065       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0805 23:20:07.902160       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0805 23:20:07.902307       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0805 23:20:09.112724       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0805 23:20:09.112781       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0805 23:20:12.071715       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0805 23:20:12.071832       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0805 23:20:12.071997       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0805 23:20:12.072309       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [5537b3a8dbcb27d26dc336a48652fdd3385ec0fb3b5169e72e472a665bc2e3ed] <==
	W0805 23:22:41.426333       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.39.57:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.57:8443: connect: connection refused
	E0805 23:22:41.426454       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.57:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.57:8443: connect: connection refused
	W0805 23:22:41.704636       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.57:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.57:8443: connect: connection refused
	E0805 23:22:41.704759       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.39.57:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.57:8443: connect: connection refused
	W0805 23:22:41.934682       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.57:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.57:8443: connect: connection refused
	E0805 23:22:41.934794       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.57:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.57:8443: connect: connection refused
	W0805 23:22:42.102653       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.57:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.57:8443: connect: connection refused
	E0805 23:22:42.102712       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.57:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.57:8443: connect: connection refused
	W0805 23:22:42.201654       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.57:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.57:8443: connect: connection refused
	E0805 23:22:42.201741       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.57:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.57:8443: connect: connection refused
	W0805 23:22:42.264517       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.57:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.57:8443: connect: connection refused
	E0805 23:22:42.264621       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.57:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.57:8443: connect: connection refused
	W0805 23:22:42.570109       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.57:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.57:8443: connect: connection refused
	E0805 23:22:42.570180       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.57:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.57:8443: connect: connection refused
	W0805 23:22:44.707475       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0805 23:22:44.707652       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0805 23:22:44.707887       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0805 23:22:44.707974       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0805 23:22:44.708083       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0805 23:22:44.708179       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0805 23:22:45.761153       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0805 23:25:19.130589       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-sf69j\": pod busybox-fc5497c4f-sf69j is already assigned to node \"ha-044175-m04\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-sf69j" node="ha-044175-m04"
	E0805 23:25:19.131305       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 9cac57cf-878e-464c-9dc0-c1dab6d3cd9a(default/busybox-fc5497c4f-sf69j) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-sf69j"
	E0805 23:25:19.133490       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-sf69j\": pod busybox-fc5497c4f-sf69j is already assigned to node \"ha-044175-m04\"" pod="default/busybox-fc5497c4f-sf69j"
	I0805 23:25:19.133657       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-sf69j" node="ha-044175-m04"
	
	
	==> kubelet <==
	Aug 05 23:23:44 ha-044175 kubelet[1390]: E0805 23:23:44.737044    1390 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:23:44 ha-044175 kubelet[1390]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:23:44 ha-044175 kubelet[1390]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:23:44 ha-044175 kubelet[1390]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:23:44 ha-044175 kubelet[1390]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:24:44 ha-044175 kubelet[1390]: E0805 23:24:44.739120    1390 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:24:44 ha-044175 kubelet[1390]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:24:44 ha-044175 kubelet[1390]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:24:44 ha-044175 kubelet[1390]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:24:44 ha-044175 kubelet[1390]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:25:44 ha-044175 kubelet[1390]: E0805 23:25:44.733128    1390 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:25:44 ha-044175 kubelet[1390]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:25:44 ha-044175 kubelet[1390]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:25:44 ha-044175 kubelet[1390]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:25:44 ha-044175 kubelet[1390]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:26:44 ha-044175 kubelet[1390]: E0805 23:26:44.737437    1390 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:26:44 ha-044175 kubelet[1390]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:26:44 ha-044175 kubelet[1390]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:26:44 ha-044175 kubelet[1390]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:26:44 ha-044175 kubelet[1390]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:27:44 ha-044175 kubelet[1390]: E0805 23:27:44.733339    1390 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:27:44 ha-044175 kubelet[1390]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:27:44 ha-044175 kubelet[1390]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:27:44 ha-044175 kubelet[1390]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:27:44 ha-044175 kubelet[1390]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0805 23:27:55.767280   38593 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19373-9606/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-044175 -n ha-044175
helpers_test.go:261: (dbg) Run:  kubectl --context ha-044175 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.61s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (334.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-342677
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-342677
E0805 23:43:16.354232   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-342677: exit status 82 (2m1.892784562s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-342677-m03"  ...
	* Stopping node "multinode-342677-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-342677" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-342677 --wait=true -v=8 --alsologtostderr
E0805 23:44:53.026009   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/functional-299463/client.crt: no such file or directory
E0805 23:46:49.981277   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/functional-299463/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-342677 --wait=true -v=8 --alsologtostderr: (3m30.642796772s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-342677
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-342677 -n multinode-342677
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-342677 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-342677 logs -n 25: (1.496497494s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-342677 ssh -n                                                                 | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:41 UTC | 05 Aug 24 23:41 UTC |
	|         | multinode-342677-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-342677 cp multinode-342677-m02:/home/docker/cp-test.txt                       | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:41 UTC | 05 Aug 24 23:41 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1038504423/001/cp-test_multinode-342677-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-342677 ssh -n                                                                 | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:41 UTC | 05 Aug 24 23:41 UTC |
	|         | multinode-342677-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-342677 cp multinode-342677-m02:/home/docker/cp-test.txt                       | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:41 UTC | 05 Aug 24 23:41 UTC |
	|         | multinode-342677:/home/docker/cp-test_multinode-342677-m02_multinode-342677.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-342677 ssh -n                                                                 | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:41 UTC | 05 Aug 24 23:41 UTC |
	|         | multinode-342677-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-342677 ssh -n multinode-342677 sudo cat                                       | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:41 UTC | 05 Aug 24 23:41 UTC |
	|         | /home/docker/cp-test_multinode-342677-m02_multinode-342677.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-342677 cp multinode-342677-m02:/home/docker/cp-test.txt                       | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:41 UTC | 05 Aug 24 23:41 UTC |
	|         | multinode-342677-m03:/home/docker/cp-test_multinode-342677-m02_multinode-342677-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-342677 ssh -n                                                                 | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:41 UTC | 05 Aug 24 23:41 UTC |
	|         | multinode-342677-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-342677 ssh -n multinode-342677-m03 sudo cat                                   | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:41 UTC | 05 Aug 24 23:41 UTC |
	|         | /home/docker/cp-test_multinode-342677-m02_multinode-342677-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-342677 cp testdata/cp-test.txt                                                | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:41 UTC | 05 Aug 24 23:41 UTC |
	|         | multinode-342677-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-342677 ssh -n                                                                 | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:41 UTC | 05 Aug 24 23:41 UTC |
	|         | multinode-342677-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-342677 cp multinode-342677-m03:/home/docker/cp-test.txt                       | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:41 UTC | 05 Aug 24 23:41 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1038504423/001/cp-test_multinode-342677-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-342677 ssh -n                                                                 | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:41 UTC | 05 Aug 24 23:41 UTC |
	|         | multinode-342677-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-342677 cp multinode-342677-m03:/home/docker/cp-test.txt                       | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:41 UTC | 05 Aug 24 23:41 UTC |
	|         | multinode-342677:/home/docker/cp-test_multinode-342677-m03_multinode-342677.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-342677 ssh -n                                                                 | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:41 UTC | 05 Aug 24 23:41 UTC |
	|         | multinode-342677-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-342677 ssh -n multinode-342677 sudo cat                                       | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:41 UTC | 05 Aug 24 23:42 UTC |
	|         | /home/docker/cp-test_multinode-342677-m03_multinode-342677.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-342677 cp multinode-342677-m03:/home/docker/cp-test.txt                       | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:42 UTC | 05 Aug 24 23:42 UTC |
	|         | multinode-342677-m02:/home/docker/cp-test_multinode-342677-m03_multinode-342677-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-342677 ssh -n                                                                 | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:42 UTC | 05 Aug 24 23:42 UTC |
	|         | multinode-342677-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-342677 ssh -n multinode-342677-m02 sudo cat                                   | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:42 UTC | 05 Aug 24 23:42 UTC |
	|         | /home/docker/cp-test_multinode-342677-m03_multinode-342677-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-342677 node stop m03                                                          | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:42 UTC | 05 Aug 24 23:42 UTC |
	| node    | multinode-342677 node start                                                             | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:42 UTC | 05 Aug 24 23:42 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-342677                                                                | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:42 UTC |                     |
	| stop    | -p multinode-342677                                                                     | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:42 UTC |                     |
	| start   | -p multinode-342677                                                                     | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:44 UTC | 05 Aug 24 23:48 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-342677                                                                | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:48 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 23:44:43
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 23:44:43.732401   47941 out.go:291] Setting OutFile to fd 1 ...
	I0805 23:44:43.732517   47941 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:44:43.732525   47941 out.go:304] Setting ErrFile to fd 2...
	I0805 23:44:43.732529   47941 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:44:43.732699   47941 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-9606/.minikube/bin
	I0805 23:44:43.733218   47941 out.go:298] Setting JSON to false
	I0805 23:44:43.734095   47941 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5230,"bootTime":1722896254,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0805 23:44:43.734148   47941 start.go:139] virtualization: kvm guest
	I0805 23:44:43.737197   47941 out.go:177] * [multinode-342677] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0805 23:44:43.738673   47941 notify.go:220] Checking for updates...
	I0805 23:44:43.738686   47941 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 23:44:43.740269   47941 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 23:44:43.741803   47941 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19373-9606/kubeconfig
	I0805 23:44:43.743344   47941 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19373-9606/.minikube
	I0805 23:44:43.744742   47941 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0805 23:44:43.746273   47941 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 23:44:43.748007   47941 config.go:182] Loaded profile config "multinode-342677": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:44:43.748104   47941 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 23:44:43.748463   47941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:44:43.748512   47941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:44:43.764312   47941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39423
	I0805 23:44:43.764761   47941 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:44:43.765280   47941 main.go:141] libmachine: Using API Version  1
	I0805 23:44:43.765297   47941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:44:43.765586   47941 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:44:43.765773   47941 main.go:141] libmachine: (multinode-342677) Calling .DriverName
	I0805 23:44:43.803461   47941 out.go:177] * Using the kvm2 driver based on existing profile
	I0805 23:44:43.804752   47941 start.go:297] selected driver: kvm2
	I0805 23:44:43.804776   47941 start.go:901] validating driver "kvm2" against &{Name:multinode-342677 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-342677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.75 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 23:44:43.804920   47941 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 23:44:43.805266   47941 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 23:44:43.805347   47941 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19373-9606/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0805 23:44:43.820369   47941 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0805 23:44:43.821188   47941 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 23:44:43.821264   47941 cni.go:84] Creating CNI manager for ""
	I0805 23:44:43.821279   47941 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0805 23:44:43.821340   47941 start.go:340] cluster config:
	{Name:multinode-342677 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-342677 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.75 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kon
g:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 23:44:43.821465   47941 iso.go:125] acquiring lock: {Name:mk54a637ed625e04bb2b6adf973b61c976cd6d35 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 23:44:43.823213   47941 out.go:177] * Starting "multinode-342677" primary control-plane node in "multinode-342677" cluster
	I0805 23:44:43.825018   47941 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 23:44:43.825059   47941 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0805 23:44:43.825073   47941 cache.go:56] Caching tarball of preloaded images
	I0805 23:44:43.825195   47941 preload.go:172] Found /home/jenkins/minikube-integration/19373-9606/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0805 23:44:43.825207   47941 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0805 23:44:43.825357   47941 profile.go:143] Saving config to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/multinode-342677/config.json ...
	I0805 23:44:43.825571   47941 start.go:360] acquireMachinesLock for multinode-342677: {Name:mkd2ba511c39504598222edbf83078b718329186 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 23:44:43.825628   47941 start.go:364] duration metric: took 33.872µs to acquireMachinesLock for "multinode-342677"
	I0805 23:44:43.825647   47941 start.go:96] Skipping create...Using existing machine configuration
	I0805 23:44:43.825656   47941 fix.go:54] fixHost starting: 
	I0805 23:44:43.825912   47941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:44:43.825947   47941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:44:43.839700   47941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36433
	I0805 23:44:43.840074   47941 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:44:43.840544   47941 main.go:141] libmachine: Using API Version  1
	I0805 23:44:43.840565   47941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:44:43.840923   47941 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:44:43.841136   47941 main.go:141] libmachine: (multinode-342677) Calling .DriverName
	I0805 23:44:43.841288   47941 main.go:141] libmachine: (multinode-342677) Calling .GetState
	I0805 23:44:43.843192   47941 fix.go:112] recreateIfNeeded on multinode-342677: state=Running err=<nil>
	W0805 23:44:43.843209   47941 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 23:44:43.845395   47941 out.go:177] * Updating the running kvm2 "multinode-342677" VM ...
	I0805 23:44:43.846838   47941 machine.go:94] provisionDockerMachine start ...
	I0805 23:44:43.846860   47941 main.go:141] libmachine: (multinode-342677) Calling .DriverName
	I0805 23:44:43.847163   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHHostname
	I0805 23:44:43.849681   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:44:43.850237   47941 main.go:141] libmachine: (multinode-342677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:94:1a", ip: ""} in network mk-multinode-342677: {Iface:virbr1 ExpiryTime:2024-08-06 00:39:05 +0000 UTC Type:0 Mac:52:54:00:90:94:1a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-342677 Clientid:01:52:54:00:90:94:1a}
	I0805 23:44:43.850275   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined IP address 192.168.39.10 and MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:44:43.850410   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHPort
	I0805 23:44:43.850596   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHKeyPath
	I0805 23:44:43.850743   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHKeyPath
	I0805 23:44:43.851009   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHUsername
	I0805 23:44:43.851248   47941 main.go:141] libmachine: Using SSH client type: native
	I0805 23:44:43.851461   47941 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0805 23:44:43.851475   47941 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 23:44:43.968076   47941 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-342677
	
	I0805 23:44:43.968110   47941 main.go:141] libmachine: (multinode-342677) Calling .GetMachineName
	I0805 23:44:43.968360   47941 buildroot.go:166] provisioning hostname "multinode-342677"
	I0805 23:44:43.968378   47941 main.go:141] libmachine: (multinode-342677) Calling .GetMachineName
	I0805 23:44:43.968574   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHHostname
	I0805 23:44:43.971403   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:44:43.971733   47941 main.go:141] libmachine: (multinode-342677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:94:1a", ip: ""} in network mk-multinode-342677: {Iface:virbr1 ExpiryTime:2024-08-06 00:39:05 +0000 UTC Type:0 Mac:52:54:00:90:94:1a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-342677 Clientid:01:52:54:00:90:94:1a}
	I0805 23:44:43.971757   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined IP address 192.168.39.10 and MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:44:43.971887   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHPort
	I0805 23:44:43.972051   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHKeyPath
	I0805 23:44:43.972207   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHKeyPath
	I0805 23:44:43.972307   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHUsername
	I0805 23:44:43.972478   47941 main.go:141] libmachine: Using SSH client type: native
	I0805 23:44:43.972645   47941 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0805 23:44:43.972658   47941 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-342677 && echo "multinode-342677" | sudo tee /etc/hostname
	I0805 23:44:44.116141   47941 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-342677
	
	I0805 23:44:44.116166   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHHostname
	I0805 23:44:44.119130   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:44:44.119520   47941 main.go:141] libmachine: (multinode-342677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:94:1a", ip: ""} in network mk-multinode-342677: {Iface:virbr1 ExpiryTime:2024-08-06 00:39:05 +0000 UTC Type:0 Mac:52:54:00:90:94:1a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-342677 Clientid:01:52:54:00:90:94:1a}
	I0805 23:44:44.119550   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined IP address 192.168.39.10 and MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:44:44.119778   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHPort
	I0805 23:44:44.119995   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHKeyPath
	I0805 23:44:44.120163   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHKeyPath
	I0805 23:44:44.120323   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHUsername
	I0805 23:44:44.120508   47941 main.go:141] libmachine: Using SSH client type: native
	I0805 23:44:44.120727   47941 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0805 23:44:44.120745   47941 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-342677' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-342677/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-342677' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 23:44:44.236051   47941 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 23:44:44.236094   47941 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19373-9606/.minikube CaCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19373-9606/.minikube}
	I0805 23:44:44.236134   47941 buildroot.go:174] setting up certificates
	I0805 23:44:44.236142   47941 provision.go:84] configureAuth start
	I0805 23:44:44.236152   47941 main.go:141] libmachine: (multinode-342677) Calling .GetMachineName
	I0805 23:44:44.236418   47941 main.go:141] libmachine: (multinode-342677) Calling .GetIP
	I0805 23:44:44.239064   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:44:44.239413   47941 main.go:141] libmachine: (multinode-342677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:94:1a", ip: ""} in network mk-multinode-342677: {Iface:virbr1 ExpiryTime:2024-08-06 00:39:05 +0000 UTC Type:0 Mac:52:54:00:90:94:1a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-342677 Clientid:01:52:54:00:90:94:1a}
	I0805 23:44:44.239438   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined IP address 192.168.39.10 and MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:44:44.239627   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHHostname
	I0805 23:44:44.242249   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:44:44.242732   47941 main.go:141] libmachine: (multinode-342677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:94:1a", ip: ""} in network mk-multinode-342677: {Iface:virbr1 ExpiryTime:2024-08-06 00:39:05 +0000 UTC Type:0 Mac:52:54:00:90:94:1a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-342677 Clientid:01:52:54:00:90:94:1a}
	I0805 23:44:44.242772   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined IP address 192.168.39.10 and MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:44:44.242891   47941 provision.go:143] copyHostCerts
	I0805 23:44:44.242931   47941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem
	I0805 23:44:44.242980   47941 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem, removing ...
	I0805 23:44:44.242994   47941 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem
	I0805 23:44:44.243115   47941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem (1082 bytes)
	I0805 23:44:44.243226   47941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem
	I0805 23:44:44.243250   47941 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem, removing ...
	I0805 23:44:44.243257   47941 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem
	I0805 23:44:44.243303   47941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem (1123 bytes)
	I0805 23:44:44.243399   47941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem
	I0805 23:44:44.243422   47941 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem, removing ...
	I0805 23:44:44.243432   47941 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem
	I0805 23:44:44.243473   47941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem (1679 bytes)
	I0805 23:44:44.243553   47941 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem org=jenkins.multinode-342677 san=[127.0.0.1 192.168.39.10 localhost minikube multinode-342677]
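The step above generates a docker-machine style server certificate signed by the minikube CA, with the listed IPs and hostnames as SANs. minikube does this internally in Go; purely as an illustrative sketch (not minikube's actual code path), an equivalent openssl invocation for the same SAN set might look like the following, reusing the ca.pem/ca-key.pem names from the log. The key size and validity period are assumptions, not values taken from the log.

	# Hypothetical openssl equivalent of provision.go's server cert generation (illustration only)
	openssl req -new -newkey rsa:2048 -nodes \
	  -keyout server-key.pem -subj "/O=jenkins.multinode-342677" -out server.csr
	openssl x509 -req -in server.csr \
	  -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 -out server.pem \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.10,DNS:localhost,DNS:minikube,DNS:multinode-342677')

The resulting server.pem/server-key.pem pair corresponds to the files copied to /etc/docker on the node in the copyRemoteCerts step that follows.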
	I0805 23:44:44.492597   47941 provision.go:177] copyRemoteCerts
	I0805 23:44:44.492669   47941 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 23:44:44.492696   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHHostname
	I0805 23:44:44.495380   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:44:44.495750   47941 main.go:141] libmachine: (multinode-342677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:94:1a", ip: ""} in network mk-multinode-342677: {Iface:virbr1 ExpiryTime:2024-08-06 00:39:05 +0000 UTC Type:0 Mac:52:54:00:90:94:1a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-342677 Clientid:01:52:54:00:90:94:1a}
	I0805 23:44:44.495771   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined IP address 192.168.39.10 and MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:44:44.495988   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHPort
	I0805 23:44:44.496214   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHKeyPath
	I0805 23:44:44.496388   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHUsername
	I0805 23:44:44.496495   47941 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/multinode-342677/id_rsa Username:docker}
	I0805 23:44:44.585516   47941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 23:44:44.585604   47941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 23:44:44.614439   47941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 23:44:44.614516   47941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0805 23:44:44.640074   47941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 23:44:44.640156   47941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0805 23:44:44.665760   47941 provision.go:87] duration metric: took 429.607552ms to configureAuth
	I0805 23:44:44.665790   47941 buildroot.go:189] setting minikube options for container-runtime
	I0805 23:44:44.666036   47941 config.go:182] Loaded profile config "multinode-342677": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:44:44.666108   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHHostname
	I0805 23:44:44.668519   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:44:44.668845   47941 main.go:141] libmachine: (multinode-342677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:94:1a", ip: ""} in network mk-multinode-342677: {Iface:virbr1 ExpiryTime:2024-08-06 00:39:05 +0000 UTC Type:0 Mac:52:54:00:90:94:1a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-342677 Clientid:01:52:54:00:90:94:1a}
	I0805 23:44:44.668869   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined IP address 192.168.39.10 and MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:44:44.669018   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHPort
	I0805 23:44:44.669191   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHKeyPath
	I0805 23:44:44.669361   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHKeyPath
	I0805 23:44:44.669502   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHUsername
	I0805 23:44:44.669675   47941 main.go:141] libmachine: Using SSH client type: native
	I0805 23:44:44.669874   47941 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0805 23:44:44.669894   47941 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 23:46:15.356788   47941 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 23:46:15.356817   47941 machine.go:97] duration metric: took 1m31.509963334s to provisionDockerMachine
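A note on the %!s(MISSING) tokens in the tee command above (and in similar commands later in this log): the remote command contains a literal printf %s verb, and minikube's printf-style logger renders it as %!s(MISSING). The command presumably sent to the guest is roughly:

	# Presumed un-mangled form of the logged command; the %s here is literal printf syntax
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio

The gap between issuing this command (23:44:44) and its completion (23:46:15) accounts for most of the 1m31.5s provisionDockerMachine duration reported just above, presumably dominated by the systemctl restart crio over SSH.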
	I0805 23:46:15.356832   47941 start.go:293] postStartSetup for "multinode-342677" (driver="kvm2")
	I0805 23:46:15.356845   47941 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 23:46:15.356864   47941 main.go:141] libmachine: (multinode-342677) Calling .DriverName
	I0805 23:46:15.357171   47941 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 23:46:15.357207   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHHostname
	I0805 23:46:15.360715   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:46:15.361255   47941 main.go:141] libmachine: (multinode-342677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:94:1a", ip: ""} in network mk-multinode-342677: {Iface:virbr1 ExpiryTime:2024-08-06 00:39:05 +0000 UTC Type:0 Mac:52:54:00:90:94:1a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-342677 Clientid:01:52:54:00:90:94:1a}
	I0805 23:46:15.361288   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined IP address 192.168.39.10 and MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:46:15.361446   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHPort
	I0805 23:46:15.361654   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHKeyPath
	I0805 23:46:15.361830   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHUsername
	I0805 23:46:15.361979   47941 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/multinode-342677/id_rsa Username:docker}
	I0805 23:46:15.450587   47941 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 23:46:15.455090   47941 command_runner.go:130] > NAME=Buildroot
	I0805 23:46:15.455114   47941 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0805 23:46:15.455122   47941 command_runner.go:130] > ID=buildroot
	I0805 23:46:15.455130   47941 command_runner.go:130] > VERSION_ID=2023.02.9
	I0805 23:46:15.455144   47941 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0805 23:46:15.455425   47941 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 23:46:15.455450   47941 filesync.go:126] Scanning /home/jenkins/minikube-integration/19373-9606/.minikube/addons for local assets ...
	I0805 23:46:15.455518   47941 filesync.go:126] Scanning /home/jenkins/minikube-integration/19373-9606/.minikube/files for local assets ...
	I0805 23:46:15.455626   47941 filesync.go:149] local asset: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem -> 167922.pem in /etc/ssl/certs
	I0805 23:46:15.455639   47941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem -> /etc/ssl/certs/167922.pem
	I0805 23:46:15.455746   47941 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 23:46:15.465622   47941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem --> /etc/ssl/certs/167922.pem (1708 bytes)
	I0805 23:46:15.490589   47941 start.go:296] duration metric: took 133.742358ms for postStartSetup
	I0805 23:46:15.490637   47941 fix.go:56] duration metric: took 1m31.664980969s for fixHost
	I0805 23:46:15.490660   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHHostname
	I0805 23:46:15.493250   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:46:15.493616   47941 main.go:141] libmachine: (multinode-342677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:94:1a", ip: ""} in network mk-multinode-342677: {Iface:virbr1 ExpiryTime:2024-08-06 00:39:05 +0000 UTC Type:0 Mac:52:54:00:90:94:1a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-342677 Clientid:01:52:54:00:90:94:1a}
	I0805 23:46:15.493646   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined IP address 192.168.39.10 and MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:46:15.493780   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHPort
	I0805 23:46:15.493986   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHKeyPath
	I0805 23:46:15.494156   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHKeyPath
	I0805 23:46:15.494262   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHUsername
	I0805 23:46:15.494392   47941 main.go:141] libmachine: Using SSH client type: native
	I0805 23:46:15.494560   47941 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0805 23:46:15.494572   47941 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 23:46:15.608152   47941 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722901575.590121821
	
	I0805 23:46:15.608179   47941 fix.go:216] guest clock: 1722901575.590121821
	I0805 23:46:15.608189   47941 fix.go:229] Guest: 2024-08-05 23:46:15.590121821 +0000 UTC Remote: 2024-08-05 23:46:15.490642413 +0000 UTC m=+91.794317759 (delta=99.479408ms)
	I0805 23:46:15.608229   47941 fix.go:200] guest clock delta is within tolerance: 99.479408ms
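For reference, date +%!s(MISSING).%!N(MISSING) above is the same logger artifact: the command presumably sent to the guest is

	date +%s.%N

i.e. epoch seconds with nanoseconds, which fix.go compares against the local timestamp. The numbers are consistent: 1722901575.590121821 (guest) − 1722901575.490642413 (local) = 0.099479408 s = 99.479408 ms, matching the reported delta, which is within the tolerance noted above.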
	I0805 23:46:15.608240   47941 start.go:83] releasing machines lock for "multinode-342677", held for 1m31.782599766s
	I0805 23:46:15.608263   47941 main.go:141] libmachine: (multinode-342677) Calling .DriverName
	I0805 23:46:15.608521   47941 main.go:141] libmachine: (multinode-342677) Calling .GetIP
	I0805 23:46:15.611183   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:46:15.611602   47941 main.go:141] libmachine: (multinode-342677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:94:1a", ip: ""} in network mk-multinode-342677: {Iface:virbr1 ExpiryTime:2024-08-06 00:39:05 +0000 UTC Type:0 Mac:52:54:00:90:94:1a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-342677 Clientid:01:52:54:00:90:94:1a}
	I0805 23:46:15.611633   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined IP address 192.168.39.10 and MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:46:15.611829   47941 main.go:141] libmachine: (multinode-342677) Calling .DriverName
	I0805 23:46:15.612479   47941 main.go:141] libmachine: (multinode-342677) Calling .DriverName
	I0805 23:46:15.612680   47941 main.go:141] libmachine: (multinode-342677) Calling .DriverName
	I0805 23:46:15.612764   47941 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 23:46:15.612821   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHHostname
	I0805 23:46:15.612917   47941 ssh_runner.go:195] Run: cat /version.json
	I0805 23:46:15.612943   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHHostname
	I0805 23:46:15.615515   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:46:15.615789   47941 main.go:141] libmachine: (multinode-342677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:94:1a", ip: ""} in network mk-multinode-342677: {Iface:virbr1 ExpiryTime:2024-08-06 00:39:05 +0000 UTC Type:0 Mac:52:54:00:90:94:1a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-342677 Clientid:01:52:54:00:90:94:1a}
	I0805 23:46:15.615816   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined IP address 192.168.39.10 and MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:46:15.615904   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:46:15.615952   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHPort
	I0805 23:46:15.616123   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHKeyPath
	I0805 23:46:15.616257   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHUsername
	I0805 23:46:15.616325   47941 main.go:141] libmachine: (multinode-342677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:94:1a", ip: ""} in network mk-multinode-342677: {Iface:virbr1 ExpiryTime:2024-08-06 00:39:05 +0000 UTC Type:0 Mac:52:54:00:90:94:1a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-342677 Clientid:01:52:54:00:90:94:1a}
	I0805 23:46:15.616349   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined IP address 192.168.39.10 and MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:46:15.616407   47941 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/multinode-342677/id_rsa Username:docker}
	I0805 23:46:15.616527   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHPort
	I0805 23:46:15.616677   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHKeyPath
	I0805 23:46:15.616796   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHUsername
	I0805 23:46:15.616939   47941 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/multinode-342677/id_rsa Username:docker}
	I0805 23:46:15.714535   47941 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0805 23:46:15.715316   47941 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0805 23:46:15.715470   47941 ssh_runner.go:195] Run: systemctl --version
	I0805 23:46:15.721764   47941 command_runner.go:130] > systemd 252 (252)
	I0805 23:46:15.721823   47941 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0805 23:46:15.721895   47941 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 23:46:15.891119   47941 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0805 23:46:15.899097   47941 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0805 23:46:15.899479   47941 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 23:46:15.899547   47941 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 23:46:15.910259   47941 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
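The find invocation two lines above is another victim of the logger's printf handling: %!p(MISSING) stands for find's literal %p format, and the parentheses and globs appear unquoted only in the logged form. What presumably runs on the guest is along these lines:

	# Presumed intent of the logged find command: rename any bridge/podman CNI configs out of the way
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;

Here nothing matched, so no CNI configs were disabled, as the cni.go:259 line confirms.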
	I0805 23:46:15.910289   47941 start.go:495] detecting cgroup driver to use...
	I0805 23:46:15.910365   47941 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 23:46:15.927314   47941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 23:46:15.942300   47941 docker.go:217] disabling cri-docker service (if available) ...
	I0805 23:46:15.942355   47941 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 23:46:15.955916   47941 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 23:46:15.969657   47941 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 23:46:16.110031   47941 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 23:46:16.259973   47941 docker.go:233] disabling docker service ...
	I0805 23:46:16.260053   47941 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 23:46:16.280842   47941 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 23:46:16.295418   47941 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 23:46:16.451997   47941 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 23:46:16.612750   47941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 23:46:16.627787   47941 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 23:46:16.647576   47941 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0805 23:46:16.648113   47941 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0805 23:46:16.648185   47941 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:46:16.659491   47941 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 23:46:16.659581   47941 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:46:16.670628   47941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:46:16.682194   47941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:46:16.693080   47941 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 23:46:16.704700   47941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:46:16.716395   47941 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:46:16.727662   47941 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
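Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, cgroup driver, conmon cgroup, and unprivileged-port sysctl settings shown below. A quick spot-check on the node (for example via minikube ssh), assuming the file path from the log, would be:

	# Spot-check the values the sed edits above are meant to produce
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# Expected lines (order depends on the file's layout):
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",

These values only take effect after the systemctl daemon-reload and crio restart that follow.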
	I0805 23:46:16.738079   47941 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 23:46:16.748597   47941 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0805 23:46:16.748674   47941 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 23:46:16.758318   47941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 23:46:16.894240   47941 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 23:46:24.560307   47941 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.66602585s)
	I0805 23:46:24.560338   47941 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 23:46:24.560390   47941 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 23:46:24.565642   47941 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0805 23:46:24.565665   47941 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0805 23:46:24.565680   47941 command_runner.go:130] > Device: 0,22	Inode: 1345        Links: 1
	I0805 23:46:24.565692   47941 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0805 23:46:24.565704   47941 command_runner.go:130] > Access: 2024-08-05 23:46:24.431503312 +0000
	I0805 23:46:24.565718   47941 command_runner.go:130] > Modify: 2024-08-05 23:46:24.431503312 +0000
	I0805 23:46:24.565729   47941 command_runner.go:130] > Change: 2024-08-05 23:46:24.431503312 +0000
	I0805 23:46:24.565736   47941 command_runner.go:130] >  Birth: -
	I0805 23:46:24.565964   47941 start.go:563] Will wait 60s for crictl version
	I0805 23:46:24.566014   47941 ssh_runner.go:195] Run: which crictl
	I0805 23:46:24.569764   47941 command_runner.go:130] > /usr/bin/crictl
	I0805 23:46:24.569908   47941 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 23:46:24.610487   47941 command_runner.go:130] > Version:  0.1.0
	I0805 23:46:24.610513   47941 command_runner.go:130] > RuntimeName:  cri-o
	I0805 23:46:24.610520   47941 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0805 23:46:24.610527   47941 command_runner.go:130] > RuntimeApiVersion:  v1
	I0805 23:46:24.610563   47941 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 23:46:24.610627   47941 ssh_runner.go:195] Run: crio --version
	I0805 23:46:24.641066   47941 command_runner.go:130] > crio version 1.29.1
	I0805 23:46:24.641089   47941 command_runner.go:130] > Version:        1.29.1
	I0805 23:46:24.641096   47941 command_runner.go:130] > GitCommit:      unknown
	I0805 23:46:24.641102   47941 command_runner.go:130] > GitCommitDate:  unknown
	I0805 23:46:24.641108   47941 command_runner.go:130] > GitTreeState:   clean
	I0805 23:46:24.641115   47941 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0805 23:46:24.641121   47941 command_runner.go:130] > GoVersion:      go1.21.6
	I0805 23:46:24.641127   47941 command_runner.go:130] > Compiler:       gc
	I0805 23:46:24.641132   47941 command_runner.go:130] > Platform:       linux/amd64
	I0805 23:46:24.641143   47941 command_runner.go:130] > Linkmode:       dynamic
	I0805 23:46:24.641150   47941 command_runner.go:130] > BuildTags:      
	I0805 23:46:24.641156   47941 command_runner.go:130] >   containers_image_ostree_stub
	I0805 23:46:24.641163   47941 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0805 23:46:24.641169   47941 command_runner.go:130] >   btrfs_noversion
	I0805 23:46:24.641182   47941 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0805 23:46:24.641188   47941 command_runner.go:130] >   libdm_no_deferred_remove
	I0805 23:46:24.641194   47941 command_runner.go:130] >   seccomp
	I0805 23:46:24.641204   47941 command_runner.go:130] > LDFlags:          unknown
	I0805 23:46:24.641225   47941 command_runner.go:130] > SeccompEnabled:   true
	I0805 23:46:24.641235   47941 command_runner.go:130] > AppArmorEnabled:  false
	I0805 23:46:24.641309   47941 ssh_runner.go:195] Run: crio --version
	I0805 23:46:24.669500   47941 command_runner.go:130] > crio version 1.29.1
	I0805 23:46:24.669527   47941 command_runner.go:130] > Version:        1.29.1
	I0805 23:46:24.669532   47941 command_runner.go:130] > GitCommit:      unknown
	I0805 23:46:24.669537   47941 command_runner.go:130] > GitCommitDate:  unknown
	I0805 23:46:24.669541   47941 command_runner.go:130] > GitTreeState:   clean
	I0805 23:46:24.669546   47941 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0805 23:46:24.669550   47941 command_runner.go:130] > GoVersion:      go1.21.6
	I0805 23:46:24.669553   47941 command_runner.go:130] > Compiler:       gc
	I0805 23:46:24.669558   47941 command_runner.go:130] > Platform:       linux/amd64
	I0805 23:46:24.669562   47941 command_runner.go:130] > Linkmode:       dynamic
	I0805 23:46:24.669566   47941 command_runner.go:130] > BuildTags:      
	I0805 23:46:24.669570   47941 command_runner.go:130] >   containers_image_ostree_stub
	I0805 23:46:24.669574   47941 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0805 23:46:24.669578   47941 command_runner.go:130] >   btrfs_noversion
	I0805 23:46:24.669582   47941 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0805 23:46:24.669587   47941 command_runner.go:130] >   libdm_no_deferred_remove
	I0805 23:46:24.669590   47941 command_runner.go:130] >   seccomp
	I0805 23:46:24.669594   47941 command_runner.go:130] > LDFlags:          unknown
	I0805 23:46:24.669598   47941 command_runner.go:130] > SeccompEnabled:   true
	I0805 23:46:24.669604   47941 command_runner.go:130] > AppArmorEnabled:  false
	I0805 23:46:24.673031   47941 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0805 23:46:24.674650   47941 main.go:141] libmachine: (multinode-342677) Calling .GetIP
	I0805 23:46:24.677360   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:46:24.677741   47941 main.go:141] libmachine: (multinode-342677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:94:1a", ip: ""} in network mk-multinode-342677: {Iface:virbr1 ExpiryTime:2024-08-06 00:39:05 +0000 UTC Type:0 Mac:52:54:00:90:94:1a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-342677 Clientid:01:52:54:00:90:94:1a}
	I0805 23:46:24.677775   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined IP address 192.168.39.10 and MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:46:24.678016   47941 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0805 23:46:24.682396   47941 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0805 23:46:24.682569   47941 kubeadm.go:883] updating cluster {Name:multinode-342677 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
30.3 ClusterName:multinode-342677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.75 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 23:46:24.682702   47941 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 23:46:24.682746   47941 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 23:46:24.728631   47941 command_runner.go:130] > {
	I0805 23:46:24.728657   47941 command_runner.go:130] >   "images": [
	I0805 23:46:24.728661   47941 command_runner.go:130] >     {
	I0805 23:46:24.728669   47941 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0805 23:46:24.728673   47941 command_runner.go:130] >       "repoTags": [
	I0805 23:46:24.728679   47941 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0805 23:46:24.728683   47941 command_runner.go:130] >       ],
	I0805 23:46:24.728687   47941 command_runner.go:130] >       "repoDigests": [
	I0805 23:46:24.728694   47941 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0805 23:46:24.728701   47941 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0805 23:46:24.728705   47941 command_runner.go:130] >       ],
	I0805 23:46:24.728709   47941 command_runner.go:130] >       "size": "87165492",
	I0805 23:46:24.728715   47941 command_runner.go:130] >       "uid": null,
	I0805 23:46:24.728719   47941 command_runner.go:130] >       "username": "",
	I0805 23:46:24.728725   47941 command_runner.go:130] >       "spec": null,
	I0805 23:46:24.728729   47941 command_runner.go:130] >       "pinned": false
	I0805 23:46:24.728732   47941 command_runner.go:130] >     },
	I0805 23:46:24.728736   47941 command_runner.go:130] >     {
	I0805 23:46:24.728742   47941 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0805 23:46:24.728749   47941 command_runner.go:130] >       "repoTags": [
	I0805 23:46:24.728754   47941 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0805 23:46:24.728758   47941 command_runner.go:130] >       ],
	I0805 23:46:24.728762   47941 command_runner.go:130] >       "repoDigests": [
	I0805 23:46:24.728769   47941 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0805 23:46:24.728778   47941 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0805 23:46:24.728781   47941 command_runner.go:130] >       ],
	I0805 23:46:24.728785   47941 command_runner.go:130] >       "size": "87165492",
	I0805 23:46:24.728789   47941 command_runner.go:130] >       "uid": null,
	I0805 23:46:24.728795   47941 command_runner.go:130] >       "username": "",
	I0805 23:46:24.728801   47941 command_runner.go:130] >       "spec": null,
	I0805 23:46:24.728804   47941 command_runner.go:130] >       "pinned": false
	I0805 23:46:24.728810   47941 command_runner.go:130] >     },
	I0805 23:46:24.728816   47941 command_runner.go:130] >     {
	I0805 23:46:24.728822   47941 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0805 23:46:24.728826   47941 command_runner.go:130] >       "repoTags": [
	I0805 23:46:24.728831   47941 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0805 23:46:24.728834   47941 command_runner.go:130] >       ],
	I0805 23:46:24.728838   47941 command_runner.go:130] >       "repoDigests": [
	I0805 23:46:24.728845   47941 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0805 23:46:24.728853   47941 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0805 23:46:24.728857   47941 command_runner.go:130] >       ],
	I0805 23:46:24.728861   47941 command_runner.go:130] >       "size": "1363676",
	I0805 23:46:24.728865   47941 command_runner.go:130] >       "uid": null,
	I0805 23:46:24.728870   47941 command_runner.go:130] >       "username": "",
	I0805 23:46:24.728875   47941 command_runner.go:130] >       "spec": null,
	I0805 23:46:24.728881   47941 command_runner.go:130] >       "pinned": false
	I0805 23:46:24.728886   47941 command_runner.go:130] >     },
	I0805 23:46:24.728890   47941 command_runner.go:130] >     {
	I0805 23:46:24.728895   47941 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0805 23:46:24.728902   47941 command_runner.go:130] >       "repoTags": [
	I0805 23:46:24.728907   47941 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0805 23:46:24.728912   47941 command_runner.go:130] >       ],
	I0805 23:46:24.728916   47941 command_runner.go:130] >       "repoDigests": [
	I0805 23:46:24.728923   47941 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0805 23:46:24.728936   47941 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0805 23:46:24.728939   47941 command_runner.go:130] >       ],
	I0805 23:46:24.728944   47941 command_runner.go:130] >       "size": "31470524",
	I0805 23:46:24.728949   47941 command_runner.go:130] >       "uid": null,
	I0805 23:46:24.728953   47941 command_runner.go:130] >       "username": "",
	I0805 23:46:24.728958   47941 command_runner.go:130] >       "spec": null,
	I0805 23:46:24.728961   47941 command_runner.go:130] >       "pinned": false
	I0805 23:46:24.728964   47941 command_runner.go:130] >     },
	I0805 23:46:24.728968   47941 command_runner.go:130] >     {
	I0805 23:46:24.728973   47941 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0805 23:46:24.728978   47941 command_runner.go:130] >       "repoTags": [
	I0805 23:46:24.728983   47941 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0805 23:46:24.728989   47941 command_runner.go:130] >       ],
	I0805 23:46:24.728993   47941 command_runner.go:130] >       "repoDigests": [
	I0805 23:46:24.729000   47941 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0805 23:46:24.729010   47941 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0805 23:46:24.729013   47941 command_runner.go:130] >       ],
	I0805 23:46:24.729016   47941 command_runner.go:130] >       "size": "61245718",
	I0805 23:46:24.729020   47941 command_runner.go:130] >       "uid": null,
	I0805 23:46:24.729024   47941 command_runner.go:130] >       "username": "nonroot",
	I0805 23:46:24.729028   47941 command_runner.go:130] >       "spec": null,
	I0805 23:46:24.729032   47941 command_runner.go:130] >       "pinned": false
	I0805 23:46:24.729035   47941 command_runner.go:130] >     },
	I0805 23:46:24.729039   47941 command_runner.go:130] >     {
	I0805 23:46:24.729045   47941 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0805 23:46:24.729049   47941 command_runner.go:130] >       "repoTags": [
	I0805 23:46:24.729053   47941 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0805 23:46:24.729056   47941 command_runner.go:130] >       ],
	I0805 23:46:24.729069   47941 command_runner.go:130] >       "repoDigests": [
	I0805 23:46:24.729078   47941 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0805 23:46:24.729084   47941 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0805 23:46:24.729090   47941 command_runner.go:130] >       ],
	I0805 23:46:24.729094   47941 command_runner.go:130] >       "size": "150779692",
	I0805 23:46:24.729098   47941 command_runner.go:130] >       "uid": {
	I0805 23:46:24.729103   47941 command_runner.go:130] >         "value": "0"
	I0805 23:46:24.729107   47941 command_runner.go:130] >       },
	I0805 23:46:24.729111   47941 command_runner.go:130] >       "username": "",
	I0805 23:46:24.729115   47941 command_runner.go:130] >       "spec": null,
	I0805 23:46:24.729119   47941 command_runner.go:130] >       "pinned": false
	I0805 23:46:24.729122   47941 command_runner.go:130] >     },
	I0805 23:46:24.729125   47941 command_runner.go:130] >     {
	I0805 23:46:24.729131   47941 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0805 23:46:24.729154   47941 command_runner.go:130] >       "repoTags": [
	I0805 23:46:24.729165   47941 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0805 23:46:24.729168   47941 command_runner.go:130] >       ],
	I0805 23:46:24.729172   47941 command_runner.go:130] >       "repoDigests": [
	I0805 23:46:24.729178   47941 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0805 23:46:24.729186   47941 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0805 23:46:24.729189   47941 command_runner.go:130] >       ],
	I0805 23:46:24.729195   47941 command_runner.go:130] >       "size": "117609954",
	I0805 23:46:24.729201   47941 command_runner.go:130] >       "uid": {
	I0805 23:46:24.729205   47941 command_runner.go:130] >         "value": "0"
	I0805 23:46:24.729208   47941 command_runner.go:130] >       },
	I0805 23:46:24.729212   47941 command_runner.go:130] >       "username": "",
	I0805 23:46:24.729215   47941 command_runner.go:130] >       "spec": null,
	I0805 23:46:24.729219   47941 command_runner.go:130] >       "pinned": false
	I0805 23:46:24.729223   47941 command_runner.go:130] >     },
	I0805 23:46:24.729226   47941 command_runner.go:130] >     {
	I0805 23:46:24.729232   47941 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0805 23:46:24.729238   47941 command_runner.go:130] >       "repoTags": [
	I0805 23:46:24.729244   47941 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0805 23:46:24.729247   47941 command_runner.go:130] >       ],
	I0805 23:46:24.729252   47941 command_runner.go:130] >       "repoDigests": [
	I0805 23:46:24.729266   47941 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0805 23:46:24.729276   47941 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0805 23:46:24.729280   47941 command_runner.go:130] >       ],
	I0805 23:46:24.729286   47941 command_runner.go:130] >       "size": "112198984",
	I0805 23:46:24.729291   47941 command_runner.go:130] >       "uid": {
	I0805 23:46:24.729295   47941 command_runner.go:130] >         "value": "0"
	I0805 23:46:24.729298   47941 command_runner.go:130] >       },
	I0805 23:46:24.729302   47941 command_runner.go:130] >       "username": "",
	I0805 23:46:24.729306   47941 command_runner.go:130] >       "spec": null,
	I0805 23:46:24.729310   47941 command_runner.go:130] >       "pinned": false
	I0805 23:46:24.729313   47941 command_runner.go:130] >     },
	I0805 23:46:24.729316   47941 command_runner.go:130] >     {
	I0805 23:46:24.729322   47941 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0805 23:46:24.729325   47941 command_runner.go:130] >       "repoTags": [
	I0805 23:46:24.729329   47941 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0805 23:46:24.729332   47941 command_runner.go:130] >       ],
	I0805 23:46:24.729336   47941 command_runner.go:130] >       "repoDigests": [
	I0805 23:46:24.729345   47941 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0805 23:46:24.729353   47941 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0805 23:46:24.729358   47941 command_runner.go:130] >       ],
	I0805 23:46:24.729362   47941 command_runner.go:130] >       "size": "85953945",
	I0805 23:46:24.729366   47941 command_runner.go:130] >       "uid": null,
	I0805 23:46:24.729370   47941 command_runner.go:130] >       "username": "",
	I0805 23:46:24.729374   47941 command_runner.go:130] >       "spec": null,
	I0805 23:46:24.729378   47941 command_runner.go:130] >       "pinned": false
	I0805 23:46:24.729381   47941 command_runner.go:130] >     },
	I0805 23:46:24.729384   47941 command_runner.go:130] >     {
	I0805 23:46:24.729390   47941 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0805 23:46:24.729395   47941 command_runner.go:130] >       "repoTags": [
	I0805 23:46:24.729400   47941 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0805 23:46:24.729403   47941 command_runner.go:130] >       ],
	I0805 23:46:24.729407   47941 command_runner.go:130] >       "repoDigests": [
	I0805 23:46:24.729417   47941 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0805 23:46:24.729424   47941 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0805 23:46:24.729430   47941 command_runner.go:130] >       ],
	I0805 23:46:24.729434   47941 command_runner.go:130] >       "size": "63051080",
	I0805 23:46:24.729437   47941 command_runner.go:130] >       "uid": {
	I0805 23:46:24.729441   47941 command_runner.go:130] >         "value": "0"
	I0805 23:46:24.729444   47941 command_runner.go:130] >       },
	I0805 23:46:24.729448   47941 command_runner.go:130] >       "username": "",
	I0805 23:46:24.729454   47941 command_runner.go:130] >       "spec": null,
	I0805 23:46:24.729458   47941 command_runner.go:130] >       "pinned": false
	I0805 23:46:24.729462   47941 command_runner.go:130] >     },
	I0805 23:46:24.729465   47941 command_runner.go:130] >     {
	I0805 23:46:24.729471   47941 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0805 23:46:24.729477   47941 command_runner.go:130] >       "repoTags": [
	I0805 23:46:24.729481   47941 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0805 23:46:24.729484   47941 command_runner.go:130] >       ],
	I0805 23:46:24.729488   47941 command_runner.go:130] >       "repoDigests": [
	I0805 23:46:24.729494   47941 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0805 23:46:24.729501   47941 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0805 23:46:24.729504   47941 command_runner.go:130] >       ],
	I0805 23:46:24.729508   47941 command_runner.go:130] >       "size": "750414",
	I0805 23:46:24.729512   47941 command_runner.go:130] >       "uid": {
	I0805 23:46:24.729516   47941 command_runner.go:130] >         "value": "65535"
	I0805 23:46:24.729521   47941 command_runner.go:130] >       },
	I0805 23:46:24.729525   47941 command_runner.go:130] >       "username": "",
	I0805 23:46:24.729529   47941 command_runner.go:130] >       "spec": null,
	I0805 23:46:24.729536   47941 command_runner.go:130] >       "pinned": true
	I0805 23:46:24.729541   47941 command_runner.go:130] >     }
	I0805 23:46:24.729545   47941 command_runner.go:130] >   ]
	I0805 23:46:24.729548   47941 command_runner.go:130] > }
	I0805 23:46:24.730333   47941 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 23:46:24.730346   47941 crio.go:433] Images already preloaded, skipping extraction
	I0805 23:46:24.730397   47941 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 23:46:24.766588   47941 command_runner.go:130] > {
	I0805 23:46:24.766612   47941 command_runner.go:130] >   "images": [
	I0805 23:46:24.766618   47941 command_runner.go:130] >     {
	I0805 23:46:24.766629   47941 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0805 23:46:24.766634   47941 command_runner.go:130] >       "repoTags": [
	I0805 23:46:24.766640   47941 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0805 23:46:24.766643   47941 command_runner.go:130] >       ],
	I0805 23:46:24.766647   47941 command_runner.go:130] >       "repoDigests": [
	I0805 23:46:24.766654   47941 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0805 23:46:24.766663   47941 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0805 23:46:24.766667   47941 command_runner.go:130] >       ],
	I0805 23:46:24.766671   47941 command_runner.go:130] >       "size": "87165492",
	I0805 23:46:24.766676   47941 command_runner.go:130] >       "uid": null,
	I0805 23:46:24.766680   47941 command_runner.go:130] >       "username": "",
	I0805 23:46:24.766687   47941 command_runner.go:130] >       "spec": null,
	I0805 23:46:24.766693   47941 command_runner.go:130] >       "pinned": false
	I0805 23:46:24.766697   47941 command_runner.go:130] >     },
	I0805 23:46:24.766701   47941 command_runner.go:130] >     {
	I0805 23:46:24.766707   47941 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0805 23:46:24.766714   47941 command_runner.go:130] >       "repoTags": [
	I0805 23:46:24.766719   47941 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0805 23:46:24.766723   47941 command_runner.go:130] >       ],
	I0805 23:46:24.766727   47941 command_runner.go:130] >       "repoDigests": [
	I0805 23:46:24.766734   47941 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0805 23:46:24.766741   47941 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0805 23:46:24.766744   47941 command_runner.go:130] >       ],
	I0805 23:46:24.766749   47941 command_runner.go:130] >       "size": "87165492",
	I0805 23:46:24.766755   47941 command_runner.go:130] >       "uid": null,
	I0805 23:46:24.766761   47941 command_runner.go:130] >       "username": "",
	I0805 23:46:24.766764   47941 command_runner.go:130] >       "spec": null,
	I0805 23:46:24.766771   47941 command_runner.go:130] >       "pinned": false
	I0805 23:46:24.766773   47941 command_runner.go:130] >     },
	I0805 23:46:24.766776   47941 command_runner.go:130] >     {
	I0805 23:46:24.766782   47941 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0805 23:46:24.766786   47941 command_runner.go:130] >       "repoTags": [
	I0805 23:46:24.766791   47941 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0805 23:46:24.766794   47941 command_runner.go:130] >       ],
	I0805 23:46:24.766798   47941 command_runner.go:130] >       "repoDigests": [
	I0805 23:46:24.766805   47941 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0805 23:46:24.766816   47941 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0805 23:46:24.766823   47941 command_runner.go:130] >       ],
	I0805 23:46:24.766829   47941 command_runner.go:130] >       "size": "1363676",
	I0805 23:46:24.766838   47941 command_runner.go:130] >       "uid": null,
	I0805 23:46:24.766842   47941 command_runner.go:130] >       "username": "",
	I0805 23:46:24.766857   47941 command_runner.go:130] >       "spec": null,
	I0805 23:46:24.766860   47941 command_runner.go:130] >       "pinned": false
	I0805 23:46:24.766863   47941 command_runner.go:130] >     },
	I0805 23:46:24.766867   47941 command_runner.go:130] >     {
	I0805 23:46:24.766875   47941 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0805 23:46:24.766882   47941 command_runner.go:130] >       "repoTags": [
	I0805 23:46:24.766887   47941 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0805 23:46:24.766893   47941 command_runner.go:130] >       ],
	I0805 23:46:24.766897   47941 command_runner.go:130] >       "repoDigests": [
	I0805 23:46:24.766906   47941 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0805 23:46:24.766918   47941 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0805 23:46:24.766923   47941 command_runner.go:130] >       ],
	I0805 23:46:24.766928   47941 command_runner.go:130] >       "size": "31470524",
	I0805 23:46:24.766934   47941 command_runner.go:130] >       "uid": null,
	I0805 23:46:24.766938   47941 command_runner.go:130] >       "username": "",
	I0805 23:46:24.766944   47941 command_runner.go:130] >       "spec": null,
	I0805 23:46:24.766948   47941 command_runner.go:130] >       "pinned": false
	I0805 23:46:24.766954   47941 command_runner.go:130] >     },
	I0805 23:46:24.766957   47941 command_runner.go:130] >     {
	I0805 23:46:24.766965   47941 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0805 23:46:24.766969   47941 command_runner.go:130] >       "repoTags": [
	I0805 23:46:24.766976   47941 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0805 23:46:24.766980   47941 command_runner.go:130] >       ],
	I0805 23:46:24.766986   47941 command_runner.go:130] >       "repoDigests": [
	I0805 23:46:24.766993   47941 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0805 23:46:24.767003   47941 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0805 23:46:24.767009   47941 command_runner.go:130] >       ],
	I0805 23:46:24.767013   47941 command_runner.go:130] >       "size": "61245718",
	I0805 23:46:24.767018   47941 command_runner.go:130] >       "uid": null,
	I0805 23:46:24.767023   47941 command_runner.go:130] >       "username": "nonroot",
	I0805 23:46:24.767029   47941 command_runner.go:130] >       "spec": null,
	I0805 23:46:24.767033   47941 command_runner.go:130] >       "pinned": false
	I0805 23:46:24.767038   47941 command_runner.go:130] >     },
	I0805 23:46:24.767042   47941 command_runner.go:130] >     {
	I0805 23:46:24.767056   47941 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0805 23:46:24.767072   47941 command_runner.go:130] >       "repoTags": [
	I0805 23:46:24.767077   47941 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0805 23:46:24.767080   47941 command_runner.go:130] >       ],
	I0805 23:46:24.767084   47941 command_runner.go:130] >       "repoDigests": [
	I0805 23:46:24.767093   47941 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0805 23:46:24.767100   47941 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0805 23:46:24.767106   47941 command_runner.go:130] >       ],
	I0805 23:46:24.767111   47941 command_runner.go:130] >       "size": "150779692",
	I0805 23:46:24.767117   47941 command_runner.go:130] >       "uid": {
	I0805 23:46:24.767121   47941 command_runner.go:130] >         "value": "0"
	I0805 23:46:24.767129   47941 command_runner.go:130] >       },
	I0805 23:46:24.767133   47941 command_runner.go:130] >       "username": "",
	I0805 23:46:24.767139   47941 command_runner.go:130] >       "spec": null,
	I0805 23:46:24.767143   47941 command_runner.go:130] >       "pinned": false
	I0805 23:46:24.767149   47941 command_runner.go:130] >     },
	I0805 23:46:24.767155   47941 command_runner.go:130] >     {
	I0805 23:46:24.767163   47941 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0805 23:46:24.767169   47941 command_runner.go:130] >       "repoTags": [
	I0805 23:46:24.767174   47941 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0805 23:46:24.767180   47941 command_runner.go:130] >       ],
	I0805 23:46:24.767184   47941 command_runner.go:130] >       "repoDigests": [
	I0805 23:46:24.767193   47941 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0805 23:46:24.767202   47941 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0805 23:46:24.767208   47941 command_runner.go:130] >       ],
	I0805 23:46:24.767212   47941 command_runner.go:130] >       "size": "117609954",
	I0805 23:46:24.767218   47941 command_runner.go:130] >       "uid": {
	I0805 23:46:24.767222   47941 command_runner.go:130] >         "value": "0"
	I0805 23:46:24.767228   47941 command_runner.go:130] >       },
	I0805 23:46:24.767232   47941 command_runner.go:130] >       "username": "",
	I0805 23:46:24.767238   47941 command_runner.go:130] >       "spec": null,
	I0805 23:46:24.767244   47941 command_runner.go:130] >       "pinned": false
	I0805 23:46:24.767250   47941 command_runner.go:130] >     },
	I0805 23:46:24.767253   47941 command_runner.go:130] >     {
	I0805 23:46:24.767261   47941 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0805 23:46:24.767267   47941 command_runner.go:130] >       "repoTags": [
	I0805 23:46:24.767272   47941 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0805 23:46:24.767278   47941 command_runner.go:130] >       ],
	I0805 23:46:24.767281   47941 command_runner.go:130] >       "repoDigests": [
	I0805 23:46:24.767296   47941 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0805 23:46:24.767306   47941 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0805 23:46:24.767311   47941 command_runner.go:130] >       ],
	I0805 23:46:24.767316   47941 command_runner.go:130] >       "size": "112198984",
	I0805 23:46:24.767322   47941 command_runner.go:130] >       "uid": {
	I0805 23:46:24.767326   47941 command_runner.go:130] >         "value": "0"
	I0805 23:46:24.767332   47941 command_runner.go:130] >       },
	I0805 23:46:24.767336   47941 command_runner.go:130] >       "username": "",
	I0805 23:46:24.767342   47941 command_runner.go:130] >       "spec": null,
	I0805 23:46:24.767346   47941 command_runner.go:130] >       "pinned": false
	I0805 23:46:24.767351   47941 command_runner.go:130] >     },
	I0805 23:46:24.767354   47941 command_runner.go:130] >     {
	I0805 23:46:24.767362   47941 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0805 23:46:24.767368   47941 command_runner.go:130] >       "repoTags": [
	I0805 23:46:24.767373   47941 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0805 23:46:24.767378   47941 command_runner.go:130] >       ],
	I0805 23:46:24.767383   47941 command_runner.go:130] >       "repoDigests": [
	I0805 23:46:24.767391   47941 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0805 23:46:24.767402   47941 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0805 23:46:24.767407   47941 command_runner.go:130] >       ],
	I0805 23:46:24.767411   47941 command_runner.go:130] >       "size": "85953945",
	I0805 23:46:24.767417   47941 command_runner.go:130] >       "uid": null,
	I0805 23:46:24.767421   47941 command_runner.go:130] >       "username": "",
	I0805 23:46:24.767426   47941 command_runner.go:130] >       "spec": null,
	I0805 23:46:24.767430   47941 command_runner.go:130] >       "pinned": false
	I0805 23:46:24.767435   47941 command_runner.go:130] >     },
	I0805 23:46:24.767439   47941 command_runner.go:130] >     {
	I0805 23:46:24.767445   47941 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0805 23:46:24.767451   47941 command_runner.go:130] >       "repoTags": [
	I0805 23:46:24.767456   47941 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0805 23:46:24.767461   47941 command_runner.go:130] >       ],
	I0805 23:46:24.767465   47941 command_runner.go:130] >       "repoDigests": [
	I0805 23:46:24.767474   47941 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0805 23:46:24.767481   47941 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0805 23:46:24.767487   47941 command_runner.go:130] >       ],
	I0805 23:46:24.767491   47941 command_runner.go:130] >       "size": "63051080",
	I0805 23:46:24.767497   47941 command_runner.go:130] >       "uid": {
	I0805 23:46:24.767501   47941 command_runner.go:130] >         "value": "0"
	I0805 23:46:24.767506   47941 command_runner.go:130] >       },
	I0805 23:46:24.767510   47941 command_runner.go:130] >       "username": "",
	I0805 23:46:24.767516   47941 command_runner.go:130] >       "spec": null,
	I0805 23:46:24.767520   47941 command_runner.go:130] >       "pinned": false
	I0805 23:46:24.767525   47941 command_runner.go:130] >     },
	I0805 23:46:24.767529   47941 command_runner.go:130] >     {
	I0805 23:46:24.767537   47941 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0805 23:46:24.767542   47941 command_runner.go:130] >       "repoTags": [
	I0805 23:46:24.767546   47941 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0805 23:46:24.767551   47941 command_runner.go:130] >       ],
	I0805 23:46:24.767555   47941 command_runner.go:130] >       "repoDigests": [
	I0805 23:46:24.767564   47941 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0805 23:46:24.767572   47941 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0805 23:46:24.767578   47941 command_runner.go:130] >       ],
	I0805 23:46:24.767582   47941 command_runner.go:130] >       "size": "750414",
	I0805 23:46:24.767589   47941 command_runner.go:130] >       "uid": {
	I0805 23:46:24.767593   47941 command_runner.go:130] >         "value": "65535"
	I0805 23:46:24.767597   47941 command_runner.go:130] >       },
	I0805 23:46:24.767601   47941 command_runner.go:130] >       "username": "",
	I0805 23:46:24.767606   47941 command_runner.go:130] >       "spec": null,
	I0805 23:46:24.767610   47941 command_runner.go:130] >       "pinned": true
	I0805 23:46:24.767616   47941 command_runner.go:130] >     }
	I0805 23:46:24.767619   47941 command_runner.go:130] >   ]
	I0805 23:46:24.767624   47941 command_runner.go:130] > }
	I0805 23:46:24.767731   47941 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 23:46:24.767740   47941 cache_images.go:84] Images are preloaded, skipping loading
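	(Editor's note) The two `crictl images --output json` dumps above are what crio.go:514 inspects before concluding that the preload can be skipped. As a minimal sketch, not minikube's actual implementation, the JSON shape visible in the log can be decoded and checked for a required tag as follows; the struct and helper names are illustrative, and only fields that appear in the log (`id`, `repoTags`, `repoDigests`, `size`, `pinned`) are modelled. Note that `size` is reported as a string and `uid` may be null or an object, so it is omitted here.

```go
// Sketch: decode `crictl images --output json` and look for a required tag.
// Field names follow the JSON shown in the log above; everything else is
// illustrative and not minikube code.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageList struct {
	Images []struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"` // reported as a string, e.g. "61245718"
		Pinned      bool     `json:"pinned"`
	} `json:"images"`
}

// hasImage reports whether any image in the list carries the given repo tag.
func hasImage(list imageList, tag string) bool {
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true
			}
		}
	}
	return false
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	fmt.Println("coredns preloaded:", hasImage(list, "registry.k8s.io/coredns/coredns:v1.11.1"))
}
```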
	I0805 23:46:24.767747   47941 kubeadm.go:934] updating node { 192.168.39.10 8443 v1.30.3 crio true true} ...
	I0805 23:46:24.767840   47941 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-342677 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-342677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
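	(Editor's note) kubeadm.go:946 echoes the kubelet unit it is about to install, with the hostname and node IP already substituted. A hedged sketch of rendering such a unit with Go's text/template follows; the template text mirrors the unit printed above, while the struct and field names are assumptions rather than minikube's own types.

```go
// Sketch: render the kubelet systemd unit shown in the log above.
// Values are taken from the log; type and template names are illustrative.
package main

import (
	"os"
	"text/template"
)

type kubeletConfig struct {
	BinaryDir        string
	HostnameOverride string
	NodeIP           string
}

// The empty ExecStart= clears any previously defined command, exactly as in
// the unit echoed by kubeadm.go above.
const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.BinaryDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.HostnameOverride}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
	cfg := kubeletConfig{
		BinaryDir:        "/var/lib/minikube/binaries/v1.30.3",
		HostnameOverride: "multinode-342677",
		NodeIP:           "192.168.39.10",
	}
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
```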
	I0805 23:46:24.767900   47941 ssh_runner.go:195] Run: crio config
	I0805 23:46:24.801234   47941 command_runner.go:130] ! time="2024-08-05 23:46:24.782826247Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0805 23:46:24.808009   47941 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0805 23:46:24.812926   47941 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0805 23:46:24.812951   47941 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0805 23:46:24.812960   47941 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0805 23:46:24.812965   47941 command_runner.go:130] > #
	I0805 23:46:24.812975   47941 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0805 23:46:24.812985   47941 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0805 23:46:24.812994   47941 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0805 23:46:24.813001   47941 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0805 23:46:24.813006   47941 command_runner.go:130] > # reload'.
	I0805 23:46:24.813011   47941 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0805 23:46:24.813020   47941 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0805 23:46:24.813026   47941 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0805 23:46:24.813034   47941 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0805 23:46:24.813037   47941 command_runner.go:130] > [crio]
	I0805 23:46:24.813043   47941 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0805 23:46:24.813050   47941 command_runner.go:130] > # containers images, in this directory.
	I0805 23:46:24.813054   47941 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0805 23:46:24.813066   47941 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0805 23:46:24.813073   47941 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0805 23:46:24.813081   47941 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0805 23:46:24.813087   47941 command_runner.go:130] > # imagestore = ""
	I0805 23:46:24.813093   47941 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0805 23:46:24.813101   47941 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0805 23:46:24.813105   47941 command_runner.go:130] > storage_driver = "overlay"
	I0805 23:46:24.813111   47941 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0805 23:46:24.813116   47941 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0805 23:46:24.813140   47941 command_runner.go:130] > storage_option = [
	I0805 23:46:24.813153   47941 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0805 23:46:24.813157   47941 command_runner.go:130] > ]
	I0805 23:46:24.813167   47941 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0805 23:46:24.813180   47941 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0805 23:46:24.813188   47941 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0805 23:46:24.813196   47941 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0805 23:46:24.813203   47941 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0805 23:46:24.813211   47941 command_runner.go:130] > # always happen on a node reboot
	I0805 23:46:24.813218   47941 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0805 23:46:24.813227   47941 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0805 23:46:24.813236   47941 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0805 23:46:24.813246   47941 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0805 23:46:24.813257   47941 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0805 23:46:24.813273   47941 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0805 23:46:24.813288   47941 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0805 23:46:24.813296   47941 command_runner.go:130] > # internal_wipe = true
	I0805 23:46:24.813305   47941 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0805 23:46:24.813313   47941 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0805 23:46:24.813317   47941 command_runner.go:130] > # internal_repair = false
	I0805 23:46:24.813323   47941 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0805 23:46:24.813332   47941 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0805 23:46:24.813343   47941 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0805 23:46:24.813355   47941 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0805 23:46:24.813367   47941 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0805 23:46:24.813376   47941 command_runner.go:130] > [crio.api]
	I0805 23:46:24.813390   47941 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0805 23:46:24.813400   47941 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0805 23:46:24.813409   47941 command_runner.go:130] > # IP address on which the stream server will listen.
	I0805 23:46:24.813416   47941 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0805 23:46:24.813423   47941 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0805 23:46:24.813431   47941 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0805 23:46:24.813440   47941 command_runner.go:130] > # stream_port = "0"
	I0805 23:46:24.813451   47941 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0805 23:46:24.813458   47941 command_runner.go:130] > # stream_enable_tls = false
	I0805 23:46:24.813470   47941 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0805 23:46:24.813479   47941 command_runner.go:130] > # stream_idle_timeout = ""
	I0805 23:46:24.813495   47941 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0805 23:46:24.813507   47941 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0805 23:46:24.813515   47941 command_runner.go:130] > # minutes.
	I0805 23:46:24.813522   47941 command_runner.go:130] > # stream_tls_cert = ""
	I0805 23:46:24.813531   47941 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0805 23:46:24.813544   47941 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0805 23:46:24.813554   47941 command_runner.go:130] > # stream_tls_key = ""
	I0805 23:46:24.813567   47941 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0805 23:46:24.813579   47941 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0805 23:46:24.813598   47941 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0805 23:46:24.813605   47941 command_runner.go:130] > # stream_tls_ca = ""
	I0805 23:46:24.813613   47941 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0805 23:46:24.813623   47941 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0805 23:46:24.813637   47941 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0805 23:46:24.813649   47941 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
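	(Editor's note) Both gRPC message-size limits are raised to 16777216 bytes here. As an illustrative example (assumed, not minikube code), a Go client talking to CRI-O over its default unix socket would mirror those limits with matching call options:

```go
// Sketch: dial CRI-O's CRI socket with send/receive limits matching the
// 16777216-byte values set in the config above. The socket path comes from
// the defaults shown in this log; the rest is illustrative.
package main

import (
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

const maxMsgSize = 16777216 // grpc_max_send_msg_size / grpc_max_recv_msg_size

func dialCRIO() (*grpc.ClientConn, error) {
	return grpc.Dial(
		"unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithDefaultCallOptions(
			grpc.MaxCallRecvMsgSize(maxMsgSize),
			grpc.MaxCallSendMsgSize(maxMsgSize),
		),
	)
}

func main() {
	conn, err := dialCRIO()
	if err != nil {
		panic(err)
	}
	defer conn.Close()
}
```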
	I0805 23:46:24.813660   47941 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0805 23:46:24.813671   47941 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0805 23:46:24.813678   47941 command_runner.go:130] > [crio.runtime]
	I0805 23:46:24.813688   47941 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0805 23:46:24.813699   47941 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0805 23:46:24.813703   47941 command_runner.go:130] > # "nofile=1024:2048"
	I0805 23:46:24.813709   47941 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0805 23:46:24.813717   47941 command_runner.go:130] > # default_ulimits = [
	I0805 23:46:24.813723   47941 command_runner.go:130] > # ]
	I0805 23:46:24.813736   47941 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0805 23:46:24.813742   47941 command_runner.go:130] > # no_pivot = false
	I0805 23:46:24.813754   47941 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0805 23:46:24.813766   47941 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0805 23:46:24.813778   47941 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0805 23:46:24.813789   47941 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0805 23:46:24.813799   47941 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0805 23:46:24.813808   47941 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0805 23:46:24.813818   47941 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0805 23:46:24.813827   47941 command_runner.go:130] > # Cgroup setting for conmon
	I0805 23:46:24.813840   47941 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0805 23:46:24.813850   47941 command_runner.go:130] > conmon_cgroup = "pod"
	I0805 23:46:24.813860   47941 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0805 23:46:24.813872   47941 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0805 23:46:24.813889   47941 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0805 23:46:24.813898   47941 command_runner.go:130] > conmon_env = [
	I0805 23:46:24.813908   47941 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0805 23:46:24.813915   47941 command_runner.go:130] > ]
	I0805 23:46:24.813924   47941 command_runner.go:130] > # Additional environment variables to set for all the
	I0805 23:46:24.813935   47941 command_runner.go:130] > # containers. These are overridden if set in the
	I0805 23:46:24.813947   47941 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0805 23:46:24.813957   47941 command_runner.go:130] > # default_env = [
	I0805 23:46:24.813964   47941 command_runner.go:130] > # ]
	I0805 23:46:24.813977   47941 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0805 23:46:24.813990   47941 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0805 23:46:24.813998   47941 command_runner.go:130] > # selinux = false
	I0805 23:46:24.814061   47941 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0805 23:46:24.814084   47941 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0805 23:46:24.814093   47941 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0805 23:46:24.814107   47941 command_runner.go:130] > # seccomp_profile = ""
	I0805 23:46:24.814119   47941 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0805 23:46:24.814131   47941 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0805 23:46:24.814155   47941 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0805 23:46:24.814165   47941 command_runner.go:130] > # which might increase security.
	I0805 23:46:24.814174   47941 command_runner.go:130] > # This option is currently deprecated,
	I0805 23:46:24.814186   47941 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0805 23:46:24.814197   47941 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0805 23:46:24.814211   47941 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0805 23:46:24.814224   47941 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0805 23:46:24.814237   47941 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0805 23:46:24.814250   47941 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0805 23:46:24.814261   47941 command_runner.go:130] > # This option supports live configuration reload.
	I0805 23:46:24.814268   47941 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0805 23:46:24.814275   47941 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0805 23:46:24.814285   47941 command_runner.go:130] > # the cgroup blockio controller.
	I0805 23:46:24.814296   47941 command_runner.go:130] > # blockio_config_file = ""
	I0805 23:46:24.814310   47941 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0805 23:46:24.814320   47941 command_runner.go:130] > # blockio parameters.
	I0805 23:46:24.814329   47941 command_runner.go:130] > # blockio_reload = false
	I0805 23:46:24.814341   47941 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0805 23:46:24.814350   47941 command_runner.go:130] > # irqbalance daemon.
	I0805 23:46:24.814361   47941 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0805 23:46:24.814374   47941 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0805 23:46:24.814387   47941 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0805 23:46:24.814401   47941 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0805 23:46:24.814416   47941 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0805 23:46:24.814429   47941 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0805 23:46:24.814441   47941 command_runner.go:130] > # This option supports live configuration reload.
	I0805 23:46:24.814450   47941 command_runner.go:130] > # rdt_config_file = ""
	I0805 23:46:24.814461   47941 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0805 23:46:24.814468   47941 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0805 23:46:24.814499   47941 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0805 23:46:24.814510   47941 command_runner.go:130] > # separate_pull_cgroup = ""
	I0805 23:46:24.814521   47941 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0805 23:46:24.814534   47941 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0805 23:46:24.814543   47941 command_runner.go:130] > # will be added.
	I0805 23:46:24.814552   47941 command_runner.go:130] > # default_capabilities = [
	I0805 23:46:24.814561   47941 command_runner.go:130] > # 	"CHOWN",
	I0805 23:46:24.814570   47941 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0805 23:46:24.814578   47941 command_runner.go:130] > # 	"FSETID",
	I0805 23:46:24.814581   47941 command_runner.go:130] > # 	"FOWNER",
	I0805 23:46:24.814585   47941 command_runner.go:130] > # 	"SETGID",
	I0805 23:46:24.814593   47941 command_runner.go:130] > # 	"SETUID",
	I0805 23:46:24.814601   47941 command_runner.go:130] > # 	"SETPCAP",
	I0805 23:46:24.814611   47941 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0805 23:46:24.814619   47941 command_runner.go:130] > # 	"KILL",
	I0805 23:46:24.814627   47941 command_runner.go:130] > # ]
	I0805 23:46:24.814638   47941 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0805 23:46:24.814651   47941 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0805 23:46:24.814661   47941 command_runner.go:130] > # add_inheritable_capabilities = false
	I0805 23:46:24.814670   47941 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0805 23:46:24.814679   47941 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0805 23:46:24.814688   47941 command_runner.go:130] > default_sysctls = [
	I0805 23:46:24.814696   47941 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0805 23:46:24.814706   47941 command_runner.go:130] > ]
	I0805 23:46:24.814714   47941 command_runner.go:130] > # List of devices on the host that a
	I0805 23:46:24.814726   47941 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0805 23:46:24.814736   47941 command_runner.go:130] > # allowed_devices = [
	I0805 23:46:24.814745   47941 command_runner.go:130] > # 	"/dev/fuse",
	I0805 23:46:24.814750   47941 command_runner.go:130] > # ]
	I0805 23:46:24.814759   47941 command_runner.go:130] > # List of additional devices. specified as
	I0805 23:46:24.814767   47941 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0805 23:46:24.814778   47941 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0805 23:46:24.814795   47941 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0805 23:46:24.814804   47941 command_runner.go:130] > # additional_devices = [
	I0805 23:46:24.814813   47941 command_runner.go:130] > # ]
	I0805 23:46:24.814825   47941 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0805 23:46:24.814834   47941 command_runner.go:130] > # cdi_spec_dirs = [
	I0805 23:46:24.814842   47941 command_runner.go:130] > # 	"/etc/cdi",
	I0805 23:46:24.814851   47941 command_runner.go:130] > # 	"/var/run/cdi",
	I0805 23:46:24.814857   47941 command_runner.go:130] > # ]
	I0805 23:46:24.814863   47941 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0805 23:46:24.814880   47941 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0805 23:46:24.814890   47941 command_runner.go:130] > # Defaults to false.
	I0805 23:46:24.814900   47941 command_runner.go:130] > # device_ownership_from_security_context = false
	I0805 23:46:24.814913   47941 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0805 23:46:24.814925   47941 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0805 23:46:24.814933   47941 command_runner.go:130] > # hooks_dir = [
	I0805 23:46:24.814944   47941 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0805 23:46:24.814952   47941 command_runner.go:130] > # ]
	I0805 23:46:24.814962   47941 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0805 23:46:24.814972   47941 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0805 23:46:24.814988   47941 command_runner.go:130] > # its default mounts from the following two files:
	I0805 23:46:24.814996   47941 command_runner.go:130] > #
	I0805 23:46:24.815012   47941 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0805 23:46:24.815025   47941 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0805 23:46:24.815036   47941 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0805 23:46:24.815043   47941 command_runner.go:130] > #
	I0805 23:46:24.815063   47941 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0805 23:46:24.815078   47941 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0805 23:46:24.815092   47941 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0805 23:46:24.815103   47941 command_runner.go:130] > #      only add mounts it finds in this file.
	I0805 23:46:24.815111   47941 command_runner.go:130] > #
	I0805 23:46:24.815119   47941 command_runner.go:130] > # default_mounts_file = ""
	I0805 23:46:24.815130   47941 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0805 23:46:24.815142   47941 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0805 23:46:24.815149   47941 command_runner.go:130] > pids_limit = 1024
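	(Editor's note) The options echoed so far (`storage_driver`, `cgroup_manager`, the `conmon*` settings, `pids_limit`, ...) are ordinary TOML under the `[crio]` and `[crio.runtime]` tables. Below is a minimal, assumed sketch of reading a few of them from /etc/crio/crio.conf with github.com/BurntSushi/toml; the field selection and struct names are illustrative, not CRI-O's or minikube's own types.

```go
// Sketch: decode a handful of the crio.conf options shown in the log above.
// Unknown keys in the file are simply ignored by the decoder.
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

type crioConf struct {
	Crio struct {
		Root          string   `toml:"root"`
		RunRoot       string   `toml:"runroot"`
		StorageDriver string   `toml:"storage_driver"`
		StorageOption []string `toml:"storage_option"`
		Runtime       struct {
			CgroupManager  string   `toml:"cgroup_manager"`
			Conmon         string   `toml:"conmon"`
			ConmonCgroup   string   `toml:"conmon_cgroup"`
			PidsLimit      int64    `toml:"pids_limit"`
			DefaultSysctls []string `toml:"default_sysctls"`
		} `toml:"runtime"`
	} `toml:"crio"`
}

func main() {
	var cfg crioConf
	if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("cgroup_manager=%q pids_limit=%d\n",
		cfg.Crio.Runtime.CgroupManager, cfg.Crio.Runtime.PidsLimit)
}
```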
	I0805 23:46:24.815160   47941 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0805 23:46:24.815173   47941 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0805 23:46:24.815186   47941 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0805 23:46:24.815201   47941 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0805 23:46:24.815210   47941 command_runner.go:130] > # log_size_max = -1
	I0805 23:46:24.815224   47941 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0805 23:46:24.815237   47941 command_runner.go:130] > # log_to_journald = false
	I0805 23:46:24.815245   47941 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0805 23:46:24.815256   47941 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0805 23:46:24.815268   47941 command_runner.go:130] > # Path to directory for container attach sockets.
	I0805 23:46:24.815278   47941 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0805 23:46:24.815290   47941 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0805 23:46:24.815300   47941 command_runner.go:130] > # bind_mount_prefix = ""
	I0805 23:46:24.815312   47941 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0805 23:46:24.815321   47941 command_runner.go:130] > # read_only = false
	I0805 23:46:24.815332   47941 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0805 23:46:24.815338   47941 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0805 23:46:24.815343   47941 command_runner.go:130] > # live configuration reload.
	I0805 23:46:24.815352   47941 command_runner.go:130] > # log_level = "info"
	I0805 23:46:24.815364   47941 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0805 23:46:24.815375   47941 command_runner.go:130] > # This option supports live configuration reload.
	I0805 23:46:24.815385   47941 command_runner.go:130] > # log_filter = ""
	I0805 23:46:24.815396   47941 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0805 23:46:24.815411   47941 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0805 23:46:24.815420   47941 command_runner.go:130] > # separated by comma.
	I0805 23:46:24.815432   47941 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0805 23:46:24.815440   47941 command_runner.go:130] > # uid_mappings = ""
	I0805 23:46:24.815454   47941 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0805 23:46:24.815467   47941 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0805 23:46:24.815477   47941 command_runner.go:130] > # separated by comma.
	I0805 23:46:24.815501   47941 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0805 23:46:24.815511   47941 command_runner.go:130] > # gid_mappings = ""
	I0805 23:46:24.815523   47941 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0805 23:46:24.815533   47941 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0805 23:46:24.815543   47941 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0805 23:46:24.815558   47941 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0805 23:46:24.815569   47941 command_runner.go:130] > # minimum_mappable_uid = -1
	I0805 23:46:24.815579   47941 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0805 23:46:24.815592   47941 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0805 23:46:24.815605   47941 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0805 23:46:24.815620   47941 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0805 23:46:24.815633   47941 command_runner.go:130] > # minimum_mappable_gid = -1
	I0805 23:46:24.815642   47941 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0805 23:46:24.815653   47941 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0805 23:46:24.815666   47941 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0805 23:46:24.815676   47941 command_runner.go:130] > # ctr_stop_timeout = 30
	I0805 23:46:24.815689   47941 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0805 23:46:24.815701   47941 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0805 23:46:24.815713   47941 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0805 23:46:24.815723   47941 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0805 23:46:24.815732   47941 command_runner.go:130] > drop_infra_ctr = false
	I0805 23:46:24.815740   47941 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0805 23:46:24.815753   47941 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0805 23:46:24.815768   47941 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0805 23:46:24.815778   47941 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0805 23:46:24.815792   47941 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0805 23:46:24.815804   47941 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0805 23:46:24.815815   47941 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0805 23:46:24.815827   47941 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0805 23:46:24.815833   47941 command_runner.go:130] > # shared_cpuset = ""
	I0805 23:46:24.815840   47941 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0805 23:46:24.815851   47941 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0805 23:46:24.815861   47941 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0805 23:46:24.815884   47941 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0805 23:46:24.815894   47941 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0805 23:46:24.815907   47941 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0805 23:46:24.815919   47941 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0805 23:46:24.815927   47941 command_runner.go:130] > # enable_criu_support = false
	I0805 23:46:24.815935   47941 command_runner.go:130] > # Enable/disable the generation of the container,
	I0805 23:46:24.815944   47941 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0805 23:46:24.815954   47941 command_runner.go:130] > # enable_pod_events = false
	I0805 23:46:24.815965   47941 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0805 23:46:24.815979   47941 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0805 23:46:24.815990   47941 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0805 23:46:24.816001   47941 command_runner.go:130] > # default_runtime = "runc"
	I0805 23:46:24.816012   47941 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0805 23:46:24.816025   47941 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0805 23:46:24.816039   47941 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0805 23:46:24.816056   47941 command_runner.go:130] > # creation as a file is not desired either.
	I0805 23:46:24.816073   47941 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0805 23:46:24.816083   47941 command_runner.go:130] > # the hostname is being managed dynamically.
	I0805 23:46:24.816094   47941 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0805 23:46:24.816099   47941 command_runner.go:130] > # ]
	I0805 23:46:24.816110   47941 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0805 23:46:24.816120   47941 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0805 23:46:24.816131   47941 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0805 23:46:24.816141   47941 command_runner.go:130] > # Each entry in the table should follow the format:
	I0805 23:46:24.816149   47941 command_runner.go:130] > #
	I0805 23:46:24.816157   47941 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0805 23:46:24.816168   47941 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0805 23:46:24.816197   47941 command_runner.go:130] > # runtime_type = "oci"
	I0805 23:46:24.816207   47941 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0805 23:46:24.816217   47941 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0805 23:46:24.816224   47941 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0805 23:46:24.816230   47941 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0805 23:46:24.816240   47941 command_runner.go:130] > # monitor_env = []
	I0805 23:46:24.816250   47941 command_runner.go:130] > # privileged_without_host_devices = false
	I0805 23:46:24.816257   47941 command_runner.go:130] > # allowed_annotations = []
	I0805 23:46:24.816270   47941 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0805 23:46:24.816278   47941 command_runner.go:130] > # Where:
	I0805 23:46:24.816343   47941 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0805 23:46:24.816374   47941 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0805 23:46:24.816390   47941 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0805 23:46:24.816403   47941 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0805 23:46:24.816412   47941 command_runner.go:130] > #   in $PATH.
	I0805 23:46:24.816424   47941 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0805 23:46:24.816435   47941 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0805 23:46:24.816447   47941 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0805 23:46:24.816457   47941 command_runner.go:130] > #   state.
	I0805 23:46:24.816471   47941 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0805 23:46:24.816484   47941 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0805 23:46:24.816497   47941 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0805 23:46:24.816509   47941 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0805 23:46:24.816523   47941 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0805 23:46:24.816536   47941 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0805 23:46:24.816553   47941 command_runner.go:130] > #   The currently recognized values are:
	I0805 23:46:24.816567   47941 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0805 23:46:24.816582   47941 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0805 23:46:24.816595   47941 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0805 23:46:24.816607   47941 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0805 23:46:24.816621   47941 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0805 23:46:24.816634   47941 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0805 23:46:24.816645   47941 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0805 23:46:24.816655   47941 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0805 23:46:24.816667   47941 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0805 23:46:24.816680   47941 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0805 23:46:24.816691   47941 command_runner.go:130] > #   deprecated option "conmon".
	I0805 23:46:24.816705   47941 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0805 23:46:24.816715   47941 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0805 23:46:24.816729   47941 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0805 23:46:24.816739   47941 command_runner.go:130] > #   should be moved to the container's cgroup
	I0805 23:46:24.816749   47941 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0805 23:46:24.816759   47941 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0805 23:46:24.816840   47941 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0805 23:46:24.816850   47941 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0805 23:46:24.816855   47941 command_runner.go:130] > #
	I0805 23:46:24.816865   47941 command_runner.go:130] > # Using the seccomp notifier feature:
	I0805 23:46:24.816873   47941 command_runner.go:130] > #
	I0805 23:46:24.816883   47941 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0805 23:46:24.816896   47941 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0805 23:46:24.816904   47941 command_runner.go:130] > #
	I0805 23:46:24.816913   47941 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0805 23:46:24.816926   47941 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0805 23:46:24.816932   47941 command_runner.go:130] > #
	I0805 23:46:24.816940   47941 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0805 23:46:24.816949   47941 command_runner.go:130] > # feature.
	I0805 23:46:24.816957   47941 command_runner.go:130] > #
	I0805 23:46:24.816968   47941 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0805 23:46:24.816980   47941 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0805 23:46:24.816993   47941 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0805 23:46:24.817009   47941 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0805 23:46:24.817021   47941 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0805 23:46:24.817028   47941 command_runner.go:130] > #
	I0805 23:46:24.817038   47941 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0805 23:46:24.817051   47941 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0805 23:46:24.817059   47941 command_runner.go:130] > #
	I0805 23:46:24.817069   47941 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0805 23:46:24.817082   47941 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0805 23:46:24.817090   47941 command_runner.go:130] > #
	I0805 23:46:24.817104   47941 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0805 23:46:24.817121   47941 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0805 23:46:24.817128   47941 command_runner.go:130] > # limitation.
	I0805 23:46:24.817135   47941 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0805 23:46:24.817145   47941 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0805 23:46:24.817154   47941 command_runner.go:130] > runtime_type = "oci"
	I0805 23:46:24.817161   47941 command_runner.go:130] > runtime_root = "/run/runc"
	I0805 23:46:24.817172   47941 command_runner.go:130] > runtime_config_path = ""
	I0805 23:46:24.817182   47941 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0805 23:46:24.817191   47941 command_runner.go:130] > monitor_cgroup = "pod"
	I0805 23:46:24.817201   47941 command_runner.go:130] > monitor_exec_cgroup = ""
	I0805 23:46:24.817210   47941 command_runner.go:130] > monitor_env = [
	I0805 23:46:24.817223   47941 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0805 23:46:24.817228   47941 command_runner.go:130] > ]
	I0805 23:46:24.817235   47941 command_runner.go:130] > privileged_without_host_devices = false
	I0805 23:46:24.817249   47941 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0805 23:46:24.817261   47941 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0805 23:46:24.817274   47941 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0805 23:46:24.817289   47941 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0805 23:46:24.817303   47941 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0805 23:46:24.817314   47941 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0805 23:46:24.817331   47941 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0805 23:46:24.817344   47941 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0805 23:46:24.817354   47941 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0805 23:46:24.817365   47941 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0805 23:46:24.817374   47941 command_runner.go:130] > # Example:
	I0805 23:46:24.817381   47941 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0805 23:46:24.817389   47941 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0805 23:46:24.817398   47941 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0805 23:46:24.817409   47941 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0805 23:46:24.817413   47941 command_runner.go:130] > # cpuset = 0
	I0805 23:46:24.817417   47941 command_runner.go:130] > # cpushares = "0-1"
	I0805 23:46:24.817420   47941 command_runner.go:130] > # Where:
	I0805 23:46:24.817427   47941 command_runner.go:130] > # The workload name is workload-type.
	I0805 23:46:24.817439   47941 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0805 23:46:24.817448   47941 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0805 23:46:24.817457   47941 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0805 23:46:24.817470   47941 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0805 23:46:24.817479   47941 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0805 23:46:24.817491   47941 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0805 23:46:24.817501   47941 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0805 23:46:24.817509   47941 command_runner.go:130] > # Default value is set to true
	I0805 23:46:24.817520   47941 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0805 23:46:24.817532   47941 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0805 23:46:24.817543   47941 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0805 23:46:24.817554   47941 command_runner.go:130] > # Default value is set to 'false'
	I0805 23:46:24.817564   47941 command_runner.go:130] > # disable_hostport_mapping = false
	I0805 23:46:24.817576   47941 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0805 23:46:24.817584   47941 command_runner.go:130] > #
	I0805 23:46:24.817596   47941 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0805 23:46:24.817605   47941 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0805 23:46:24.817616   47941 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0805 23:46:24.817629   47941 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0805 23:46:24.817641   47941 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0805 23:46:24.817650   47941 command_runner.go:130] > [crio.image]
	I0805 23:46:24.817662   47941 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0805 23:46:24.817671   47941 command_runner.go:130] > # default_transport = "docker://"
	I0805 23:46:24.817685   47941 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0805 23:46:24.817695   47941 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0805 23:46:24.817702   47941 command_runner.go:130] > # global_auth_file = ""
	I0805 23:46:24.817709   47941 command_runner.go:130] > # The image used to instantiate infra containers.
	I0805 23:46:24.817721   47941 command_runner.go:130] > # This option supports live configuration reload.
	I0805 23:46:24.817732   47941 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0805 23:46:24.817745   47941 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0805 23:46:24.817757   47941 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0805 23:46:24.817769   47941 command_runner.go:130] > # This option supports live configuration reload.
	I0805 23:46:24.817780   47941 command_runner.go:130] > # pause_image_auth_file = ""
	I0805 23:46:24.817794   47941 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0805 23:46:24.817807   47941 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0805 23:46:24.817821   47941 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0805 23:46:24.817832   47941 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0805 23:46:24.817842   47941 command_runner.go:130] > # pause_command = "/pause"
	I0805 23:46:24.817854   47941 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0805 23:46:24.817866   47941 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0805 23:46:24.817874   47941 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0805 23:46:24.817893   47941 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0805 23:46:24.817905   47941 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0805 23:46:24.817919   47941 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0805 23:46:24.817929   47941 command_runner.go:130] > # pinned_images = [
	I0805 23:46:24.817937   47941 command_runner.go:130] > # ]
	I0805 23:46:24.817949   47941 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0805 23:46:24.817962   47941 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0805 23:46:24.817972   47941 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0805 23:46:24.817982   47941 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0805 23:46:24.817993   47941 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0805 23:46:24.818003   47941 command_runner.go:130] > # signature_policy = ""
	I0805 23:46:24.818018   47941 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0805 23:46:24.818032   47941 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0805 23:46:24.818045   47941 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0805 23:46:24.818057   47941 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0805 23:46:24.818069   47941 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0805 23:46:24.818076   47941 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0805 23:46:24.818085   47941 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0805 23:46:24.818099   47941 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0805 23:46:24.818110   47941 command_runner.go:130] > # changing them here.
	I0805 23:46:24.818120   47941 command_runner.go:130] > # insecure_registries = [
	I0805 23:46:24.818128   47941 command_runner.go:130] > # ]
	I0805 23:46:24.818141   47941 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0805 23:46:24.818152   47941 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0805 23:46:24.818160   47941 command_runner.go:130] > # image_volumes = "mkdir"
	I0805 23:46:24.818165   47941 command_runner.go:130] > # Temporary directory to use for storing big files
	I0805 23:46:24.818174   47941 command_runner.go:130] > # big_files_temporary_dir = ""
	I0805 23:46:24.818195   47941 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0805 23:46:24.818205   47941 command_runner.go:130] > # CNI plugins.
	I0805 23:46:24.818213   47941 command_runner.go:130] > [crio.network]
	I0805 23:46:24.818225   47941 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0805 23:46:24.818237   47941 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0805 23:46:24.818246   47941 command_runner.go:130] > # cni_default_network = ""
	I0805 23:46:24.818254   47941 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0805 23:46:24.818263   47941 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0805 23:46:24.818275   47941 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0805 23:46:24.818285   47941 command_runner.go:130] > # plugin_dirs = [
	I0805 23:46:24.818294   47941 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0805 23:46:24.818303   47941 command_runner.go:130] > # ]
	I0805 23:46:24.818313   47941 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0805 23:46:24.818322   47941 command_runner.go:130] > [crio.metrics]
	I0805 23:46:24.818332   47941 command_runner.go:130] > # Globally enable or disable metrics support.
	I0805 23:46:24.818339   47941 command_runner.go:130] > enable_metrics = true
	I0805 23:46:24.818344   47941 command_runner.go:130] > # Specify enabled metrics collectors.
	I0805 23:46:24.818354   47941 command_runner.go:130] > # Per default all metrics are enabled.
	I0805 23:46:24.818366   47941 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0805 23:46:24.818380   47941 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0805 23:46:24.818392   47941 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0805 23:46:24.818401   47941 command_runner.go:130] > # metrics_collectors = [
	I0805 23:46:24.818411   47941 command_runner.go:130] > # 	"operations",
	I0805 23:46:24.818422   47941 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0805 23:46:24.818431   47941 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0805 23:46:24.818436   47941 command_runner.go:130] > # 	"operations_errors",
	I0805 23:46:24.818440   47941 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0805 23:46:24.818450   47941 command_runner.go:130] > # 	"image_pulls_by_name",
	I0805 23:46:24.818461   47941 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0805 23:46:24.818472   47941 command_runner.go:130] > # 	"image_pulls_failures",
	I0805 23:46:24.818483   47941 command_runner.go:130] > # 	"image_pulls_successes",
	I0805 23:46:24.818492   47941 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0805 23:46:24.818502   47941 command_runner.go:130] > # 	"image_layer_reuse",
	I0805 23:46:24.818512   47941 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0805 23:46:24.818521   47941 command_runner.go:130] > # 	"containers_oom_total",
	I0805 23:46:24.818529   47941 command_runner.go:130] > # 	"containers_oom",
	I0805 23:46:24.818533   47941 command_runner.go:130] > # 	"processes_defunct",
	I0805 23:46:24.818539   47941 command_runner.go:130] > # 	"operations_total",
	I0805 23:46:24.818548   47941 command_runner.go:130] > # 	"operations_latency_seconds",
	I0805 23:46:24.818559   47941 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0805 23:46:24.818567   47941 command_runner.go:130] > # 	"operations_errors_total",
	I0805 23:46:24.818577   47941 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0805 23:46:24.818587   47941 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0805 23:46:24.818596   47941 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0805 23:46:24.818607   47941 command_runner.go:130] > # 	"image_pulls_success_total",
	I0805 23:46:24.818619   47941 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0805 23:46:24.818627   47941 command_runner.go:130] > # 	"containers_oom_count_total",
	I0805 23:46:24.818632   47941 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0805 23:46:24.818641   47941 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0805 23:46:24.818650   47941 command_runner.go:130] > # ]
	I0805 23:46:24.818662   47941 command_runner.go:130] > # The port on which the metrics server will listen.
	I0805 23:46:24.818671   47941 command_runner.go:130] > # metrics_port = 9090
	I0805 23:46:24.818683   47941 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0805 23:46:24.818692   47941 command_runner.go:130] > # metrics_socket = ""
	I0805 23:46:24.818703   47941 command_runner.go:130] > # The certificate for the secure metrics server.
	I0805 23:46:24.818714   47941 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0805 23:46:24.818724   47941 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0805 23:46:24.818734   47941 command_runner.go:130] > # certificate on any modification event.
	I0805 23:46:24.818744   47941 command_runner.go:130] > # metrics_cert = ""
	I0805 23:46:24.818752   47941 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0805 23:46:24.818764   47941 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0805 23:46:24.818773   47941 command_runner.go:130] > # metrics_key = ""
	I0805 23:46:24.818791   47941 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0805 23:46:24.818801   47941 command_runner.go:130] > [crio.tracing]
	I0805 23:46:24.818811   47941 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0805 23:46:24.818819   47941 command_runner.go:130] > # enable_tracing = false
	I0805 23:46:24.818827   47941 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0805 23:46:24.818838   47941 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0805 23:46:24.818853   47941 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0805 23:46:24.818864   47941 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0805 23:46:24.818874   47941 command_runner.go:130] > # CRI-O NRI configuration.
	I0805 23:46:24.818883   47941 command_runner.go:130] > [crio.nri]
	I0805 23:46:24.818893   47941 command_runner.go:130] > # Globally enable or disable NRI.
	I0805 23:46:24.818900   47941 command_runner.go:130] > # enable_nri = false
	I0805 23:46:24.818904   47941 command_runner.go:130] > # NRI socket to listen on.
	I0805 23:46:24.818913   47941 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0805 23:46:24.818923   47941 command_runner.go:130] > # NRI plugin directory to use.
	I0805 23:46:24.818935   47941 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0805 23:46:24.818945   47941 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0805 23:46:24.818957   47941 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0805 23:46:24.818968   47941 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0805 23:46:24.818978   47941 command_runner.go:130] > # nri_disable_connections = false
	I0805 23:46:24.818987   47941 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0805 23:46:24.818996   47941 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0805 23:46:24.819008   47941 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0805 23:46:24.819017   47941 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0805 23:46:24.819028   47941 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0805 23:46:24.819037   47941 command_runner.go:130] > [crio.stats]
	I0805 23:46:24.819064   47941 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0805 23:46:24.819077   47941 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0805 23:46:24.819087   47941 command_runner.go:130] > # stats_collection_period = 0
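For reference, the CRI-O drop-in dumped above is plain TOML. Below is a minimal, hypothetical Go sketch of decoding a few of the keys it sets (pinns_path, drop_infra_ctr, and the [crio.runtime.runtimes.runc] handler); it uses the third-party github.com/BurntSushi/toml package, and the drop-in path /etc/crio/crio.conf.d/02-crio.conf is an assumption, not taken from this log.

// Hypothetical sketch only: decode a few of the CRI-O settings shown above.
// The struct mirrors field names from the dump; it is not CRI-O's or
// minikube's own type, and the file path is an assumption.
package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

type runtimeHandler struct {
	RuntimePath string `toml:"runtime_path"`
	RuntimeType string `toml:"runtime_type"`
	RuntimeRoot string `toml:"runtime_root"`
	MonitorPath string `toml:"monitor_path"`
}

type crioConfig struct {
	Crio struct {
		Runtime struct {
			PinnsPath    string                    `toml:"pinns_path"`
			DropInfraCtr bool                      `toml:"drop_infra_ctr"`
			Runtimes     map[string]runtimeHandler `toml:"runtimes"`
		} `toml:"runtime"`
	} `toml:"crio"`
}

func main() {
	var cfg crioConfig
	if _, err := toml.DecodeFile("/etc/crio/crio.conf.d/02-crio.conf", &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Println("pinns_path:", cfg.Crio.Runtime.PinnsPath)
	fmt.Println("runc runtime_path:", cfg.Crio.Runtime.Runtimes["runc"].RuntimePath)
}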
	I0805 23:46:24.819209   47941 cni.go:84] Creating CNI manager for ""
	I0805 23:46:24.819221   47941 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0805 23:46:24.819231   47941 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 23:46:24.819258   47941 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.10 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-342677 NodeName:multinode-342677 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 23:46:24.819422   47941 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-342677"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.10
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.10"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 23:46:24.819489   47941 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 23:46:24.830409   47941 command_runner.go:130] > kubeadm
	I0805 23:46:24.830431   47941 command_runner.go:130] > kubectl
	I0805 23:46:24.830439   47941 command_runner.go:130] > kubelet
	I0805 23:46:24.830458   47941 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 23:46:24.830516   47941 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 23:46:24.840701   47941 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0805 23:46:24.858099   47941 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 23:46:24.875301   47941 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0805 23:46:24.892377   47941 ssh_runner.go:195] Run: grep 192.168.39.10	control-plane.minikube.internal$ /etc/hosts
	I0805 23:46:24.896281   47941 command_runner.go:130] > 192.168.39.10	control-plane.minikube.internal
	I0805 23:46:24.896358   47941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 23:46:25.039844   47941 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 23:46:25.054795   47941 certs.go:68] Setting up /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/multinode-342677 for IP: 192.168.39.10
	I0805 23:46:25.054825   47941 certs.go:194] generating shared ca certs ...
	I0805 23:46:25.054861   47941 certs.go:226] acquiring lock for ca certs: {Name:mkf35a042c1656d191f542eee7fa087aad4d29d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 23:46:25.055026   47941 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key
	I0805 23:46:25.055129   47941 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key
	I0805 23:46:25.055142   47941 certs.go:256] generating profile certs ...
	I0805 23:46:25.055227   47941 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/multinode-342677/client.key
	I0805 23:46:25.055280   47941 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/multinode-342677/apiserver.key.35d08239
	I0805 23:46:25.055323   47941 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/multinode-342677/proxy-client.key
	I0805 23:46:25.055333   47941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0805 23:46:25.055347   47941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0805 23:46:25.055359   47941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0805 23:46:25.055371   47941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0805 23:46:25.055386   47941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/multinode-342677/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0805 23:46:25.055399   47941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/multinode-342677/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0805 23:46:25.055411   47941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/multinode-342677/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0805 23:46:25.055423   47941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/multinode-342677/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0805 23:46:25.055482   47941 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792.pem (1338 bytes)
	W0805 23:46:25.055509   47941 certs.go:480] ignoring /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792_empty.pem, impossibly tiny 0 bytes
	I0805 23:46:25.055518   47941 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 23:46:25.055538   47941 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem (1082 bytes)
	I0805 23:46:25.055560   47941 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem (1123 bytes)
	I0805 23:46:25.055582   47941 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem (1679 bytes)
	I0805 23:46:25.055618   47941 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem (1708 bytes)
	I0805 23:46:25.055643   47941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792.pem -> /usr/share/ca-certificates/16792.pem
	I0805 23:46:25.055656   47941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem -> /usr/share/ca-certificates/167922.pem
	I0805 23:46:25.055668   47941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0805 23:46:25.056293   47941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 23:46:25.081605   47941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0805 23:46:25.105594   47941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 23:46:25.129306   47941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 23:46:25.155297   47941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/multinode-342677/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0805 23:46:25.179024   47941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/multinode-342677/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0805 23:46:25.202720   47941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/multinode-342677/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 23:46:25.226906   47941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/multinode-342677/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 23:46:25.251500   47941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792.pem --> /usr/share/ca-certificates/16792.pem (1338 bytes)
	I0805 23:46:25.275954   47941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem --> /usr/share/ca-certificates/167922.pem (1708 bytes)
	I0805 23:46:25.300838   47941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 23:46:25.326582   47941 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 23:46:25.344163   47941 ssh_runner.go:195] Run: openssl version
	I0805 23:46:25.349800   47941 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0805 23:46:25.349999   47941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16792.pem && ln -fs /usr/share/ca-certificates/16792.pem /etc/ssl/certs/16792.pem"
	I0805 23:46:25.362145   47941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16792.pem
	I0805 23:46:25.366828   47941 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  5 23:03 /usr/share/ca-certificates/16792.pem
	I0805 23:46:25.366860   47941 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 23:03 /usr/share/ca-certificates/16792.pem
	I0805 23:46:25.366918   47941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16792.pem
	I0805 23:46:25.372682   47941 command_runner.go:130] > 51391683
	I0805 23:46:25.372754   47941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16792.pem /etc/ssl/certs/51391683.0"
	I0805 23:46:25.382568   47941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167922.pem && ln -fs /usr/share/ca-certificates/167922.pem /etc/ssl/certs/167922.pem"
	I0805 23:46:25.393372   47941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167922.pem
	I0805 23:46:25.397795   47941 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  5 23:03 /usr/share/ca-certificates/167922.pem
	I0805 23:46:25.397964   47941 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 23:03 /usr/share/ca-certificates/167922.pem
	I0805 23:46:25.398009   47941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167922.pem
	I0805 23:46:25.403461   47941 command_runner.go:130] > 3ec20f2e
	I0805 23:46:25.403517   47941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167922.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 23:46:25.412918   47941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 23:46:25.425122   47941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 23:46:25.429785   47941 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  5 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0805 23:46:25.429937   47941 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0805 23:46:25.429984   47941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 23:46:25.435580   47941 command_runner.go:130] > b5213941
	I0805 23:46:25.435643   47941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
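The three blocks above repeat the same pattern per certificate: compute the OpenSSL subject hash, then, if no link exists yet, point /etc/ssl/certs/<hash>.0 at the certificate. A standalone Go sketch of that sequence follows, using the same openssl invocation seen in the log; the paths and the linkCert helper name are illustrative only.

// Sketch of the per-certificate steps visible in the log above: compute the
// OpenSSL subject hash and, if the link is missing, create
// /etc/ssl/certs/<hash>.0 pointing at the certificate. Needs root to write
// into /etc/ssl/certs; not minikube code.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func linkCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "51391683" in the log above
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	if _, err := os.Lstat(link); err == nil {
		return nil // link already present, mirroring `test -L ... || ln -fs ...`
	}
	return os.Symlink(certPath, link)
}

func main() {
	for _, c := range []string{
		"/usr/share/ca-certificates/16792.pem",
		"/usr/share/ca-certificates/minikubeCA.pem",
	} {
		if err := linkCert(c); err != nil {
			log.Println(c, err)
		}
	}
}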
	I0805 23:46:25.445301   47941 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 23:46:25.449790   47941 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 23:46:25.449817   47941 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0805 23:46:25.449825   47941 command_runner.go:130] > Device: 253,1	Inode: 4197931     Links: 1
	I0805 23:46:25.449834   47941 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0805 23:46:25.449842   47941 command_runner.go:130] > Access: 2024-08-05 23:39:24.040104079 +0000
	I0805 23:46:25.449850   47941 command_runner.go:130] > Modify: 2024-08-05 23:39:24.040104079 +0000
	I0805 23:46:25.449857   47941 command_runner.go:130] > Change: 2024-08-05 23:39:24.040104079 +0000
	I0805 23:46:25.449867   47941 command_runner.go:130] >  Birth: 2024-08-05 23:39:24.040104079 +0000
	I0805 23:46:25.449954   47941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 23:46:25.455723   47941 command_runner.go:130] > Certificate will not expire
	I0805 23:46:25.455957   47941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 23:46:25.461732   47941 command_runner.go:130] > Certificate will not expire
	I0805 23:46:25.461842   47941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 23:46:25.467480   47941 command_runner.go:130] > Certificate will not expire
	I0805 23:46:25.467552   47941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 23:46:25.473223   47941 command_runner.go:130] > Certificate will not expire
	I0805 23:46:25.473271   47941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 23:46:25.478777   47941 command_runner.go:130] > Certificate will not expire
	I0805 23:46:25.478849   47941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0805 23:46:25.484327   47941 command_runner.go:130] > Certificate will not expire
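Each probe above is `openssl x509 -checkend 86400`, i.e. "will this certificate expire within the next 24 hours?". A small Go equivalent using crypto/x509 is sketched below; the path is one of the certs checked in the log, and the expiresWithin helper is an illustrative name.

// Equivalent of the `openssl x509 -checkend 86400` probes above: parse the
// PEM certificate and report whether NotAfter falls within the given window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}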
	I0805 23:46:25.484386   47941 kubeadm.go:392] StartCluster: {Name:multinode-342677 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-342677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.75 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 23:46:25.484496   47941 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 23:46:25.484543   47941 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 23:46:25.523255   47941 command_runner.go:130] > 00b5601d99857e510793bf69888cd3fe706ae1919ea2fa58cd1e49e2cc8fe8f3
	I0805 23:46:25.523287   47941 command_runner.go:130] > c8b4c139ba9f34045731ac7ff528df9d7aebaf6e5e682eeac1c47b5710313379
	I0805 23:46:25.523297   47941 command_runner.go:130] > 150ce9e294a897bb1eee154f726f0956df1618247219dcd049722c011dbe331e
	I0805 23:46:25.523308   47941 command_runner.go:130] > 3b1d0ef18e29d3787609be51f754e7f2324ee16d19d999762bac401d079a7fd2
	I0805 23:46:25.523317   47941 command_runner.go:130] > f227e8cf03b66f737d02e2c7b817576ad72901aa61a0e63d337fb36ec9c32943
	I0805 23:46:25.523325   47941 command_runner.go:130] > 5cc7242052f30bef2f21e600e245b76900de63c25a681c55c467489b4bb4cad9
	I0805 23:46:25.523334   47941 command_runner.go:130] > 30e13b94e51e4836e65d865d70745d086a906658385b8b067fe0d8e69095705e
	I0805 23:46:25.523344   47941 command_runner.go:130] > 9d3772211d8011c9a6554ddc5569f3920bbe3050b56a031062e0557cf43be0e2
	I0805 23:46:25.523375   47941 cri.go:89] found id: "00b5601d99857e510793bf69888cd3fe706ae1919ea2fa58cd1e49e2cc8fe8f3"
	I0805 23:46:25.523387   47941 cri.go:89] found id: "c8b4c139ba9f34045731ac7ff528df9d7aebaf6e5e682eeac1c47b5710313379"
	I0805 23:46:25.523393   47941 cri.go:89] found id: "150ce9e294a897bb1eee154f726f0956df1618247219dcd049722c011dbe331e"
	I0805 23:46:25.523398   47941 cri.go:89] found id: "3b1d0ef18e29d3787609be51f754e7f2324ee16d19d999762bac401d079a7fd2"
	I0805 23:46:25.523402   47941 cri.go:89] found id: "f227e8cf03b66f737d02e2c7b817576ad72901aa61a0e63d337fb36ec9c32943"
	I0805 23:46:25.523406   47941 cri.go:89] found id: "5cc7242052f30bef2f21e600e245b76900de63c25a681c55c467489b4bb4cad9"
	I0805 23:46:25.523409   47941 cri.go:89] found id: "30e13b94e51e4836e65d865d70745d086a906658385b8b067fe0d8e69095705e"
	I0805 23:46:25.523413   47941 cri.go:89] found id: "9d3772211d8011c9a6554ddc5569f3920bbe3050b56a031062e0557cf43be0e2"
	I0805 23:46:25.523415   47941 cri.go:89] found id: ""
	I0805 23:46:25.523455   47941 ssh_runner.go:195] Run: sudo runc list -f json
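The "found id" entries above come from the single crictl invocation logged just before them. A hypothetical Go sketch that runs the same command and collects one container ID per non-empty output line is shown below; listKubeSystemContainers is an illustrative name, not minikube's cri package.

// Sketch of the container-ID listing step shown above: run the same crictl
// command and gather one ID per non-empty output line.
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func listKubeSystemContainers() ([]string, error) {
	cmd := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system")
	out, err := cmd.Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	sc := bufio.NewScanner(bytes.NewReader(out))
	for sc.Scan() {
		if id := strings.TrimSpace(sc.Text()); id != "" {
			ids = append(ids, id)
		}
	}
	return ids, sc.Err()
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		log.Fatal(err)
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}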
	
	
	==> CRI-O <==
	Aug 05 23:48:15 multinode-342677 crio[2886]: time="2024-08-05 23:48:15.009017745Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722901695008991252,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143053,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=30889dba-788e-427a-a1da-2646700ddece name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:48:15 multinode-342677 crio[2886]: time="2024-08-05 23:48:15.009723160Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b03765d8-b175-4906-8054-c0916b4d200a name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:48:15 multinode-342677 crio[2886]: time="2024-08-05 23:48:15.009781833Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b03765d8-b175-4906-8054-c0916b4d200a name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:48:15 multinode-342677 crio[2886]: time="2024-08-05 23:48:15.010150286Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a0e5a6b7ec4cef49b6d8fde0b12859c53f333ffac6eb59a728ac65e9274ba3bf,PodSandboxId:9c058b71634ff24060e0e8b0c1b24a92c2863e78046e35171cf27bb43980ef81,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722901625795332746,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-78mt7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2761ea7e-d8a2-40d3-bd8d-a2e484b0bec3,},Annotations:map[string]string{io.kubernetes.container.hash: 5d980674,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62f62a176c70a254e48450706f9b7524e717202076210725309a1e6c28138bc,PodSandboxId:ee8ad4fa44950fa02aa006da747e47186dc3d9aa497736cc81b26273582092da,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1722901592221740056,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6c596,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8a66d1c-c60f-4a75-8104-151faf7922b9,},Annotations:map[string]string{io.kubernetes.container.hash: 4ff057c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6797a9b46983671da1ba766fb34f5be198a6f9d393ac2f171339c0def77c28e1,PodSandboxId:50b50402089f5ef893f2e1020443ac3740204eafd51c0bb3b0c1a95387d4a4f4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722901592169187259,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v42dl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f82457c8-44fc-476d-828b-ac33899c132b,},Annotations:map[string]string{io.kubernetes.container.hash: 66c772fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b79ca8015a145db755007359177d373f8fb63ee8d261e67f64838e7af497133,PodSandboxId:f794a28dee848aa9a6e5f529ff4b0ac13bcdc20f7efc577d10f83af4d0a7f96e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722901591997430400,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2dnzb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddda1087-36af-4e82-88d3-54e6348c5e22,},Annotations:map[string]
string{io.kubernetes.container.hash: 10979fcd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5a92de8354098f5dc94e8aa94bb5d5aad51d11e8a6025988fd20a80568eee49,PodSandboxId:b37ed3d64fd18e7e93f99b002137d4d76775a81143bf44397c5edabd2b86fcc0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722901591963793102,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71064ea8-4354-4f74-9efc-52487675def4,},Annotations:map[string]string{io.ku
bernetes.container.hash: f7b2680,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:943c42b387fc724e757ea4b76361e6a758b577b524c7c10390b65369cea51422,PodSandboxId:054f8fceeb7cdcfd41e776e32bd55b59140891b42d00369b60b9aaad1a58465c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722901588232266700,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f1a0b5192f07729588fefe71f91e855,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:268fef5c96aefcf57fc17aa09c4ebf2c737c37b5bdc83fe67a396bfa1b804384,PodSandboxId:365c7bc7a2b1ca350880d983794b4c6feb8597b97f66dfc3b8a048ac9720136c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722901588210215125,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe9d35cc8086dee57d5df88c8a99e7d8,},Annotations:map[string]string{io.kubernetes.container.hash: f19e9654,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45dfe807a437e2efa653406b8d23df5726dff92deecdb42360742ab37c64c201,PodSandboxId:ac8cb2ec338da417c69fee22ef95616b800021fb34ffc5c70712e3fcbf35a0d0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722901588202172871,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20dbf724a8b080d422a73b396072a4c7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdaf3015949ce621acf67c07735918381578c9af19ebd3e5221f87a4cd2af079,PodSandboxId:3b873b27fc9d25b6c540d99c09926b8080f05d4f2343f9b81ce0b3b945380ea9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722901588169836346,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efe0bdf356940b826a4ba3b020e6529c,},Annotations:map[string]string{io.kubernetes.container.hash: d82bc00b,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ca4e362daa3cf637429cc280868001a87ead2a1c6b86c42ca8880864eb2b33b,PodSandboxId:cbe52e52d544551a4eaeb48f07902ba0252b2e562e0b7426cc66a20762a4a053,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722901260953256689,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-78mt7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2761ea7e-d8a2-40d3-bd8d-a2e484b0bec3,},Annotations:map[string]string{io.kubernetes.container.hash: 5d980674,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00b5601d99857e510793bf69888cd3fe706ae1919ea2fa58cd1e49e2cc8fe8f3,PodSandboxId:3d9e1ffaf8822a12d115003c1883d1817957a0bcb4e4d516649e4b91ab06ba3c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722901206217934742,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v42dl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f82457c8-44fc-476d-828b-ac33899c132b,},Annotations:map[string]string{io.kubernetes.container.hash: 66c772fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8b4c139ba9f34045731ac7ff528df9d7aebaf6e5e682eeac1c47b5710313379,PodSandboxId:a8259144d8379242d353a86d5adec712cc26b0a08e440fabe5668e9603e2a7e1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722901204667548032,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 71064ea8-4354-4f74-9efc-52487675def4,},Annotations:map[string]string{io.kubernetes.container.hash: f7b2680,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:150ce9e294a897bb1eee154f726f0956df1618247219dcd049722c011dbe331e,PodSandboxId:0e5f1c11948a79d3e2c7d6179de4c73e196695f95a60f9892b69f6ec45c16d38,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1722901192992033098,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6c596,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: a8a66d1c-c60f-4a75-8104-151faf7922b9,},Annotations:map[string]string{io.kubernetes.container.hash: 4ff057c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b1d0ef18e29d3787609be51f754e7f2324ee16d19d999762bac401d079a7fd2,PodSandboxId:353f772917eac257829637e206a259eb1f44afeea71efacfab5ce5d5af8892b0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722901189044346374,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2dnzb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: ddda1087-36af-4e82-88d3-54e6348c5e22,},Annotations:map[string]string{io.kubernetes.container.hash: 10979fcd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e13b94e51e4836e65d865d70745d086a906658385b8b067fe0d8e69095705e,PodSandboxId:18c7ba91ecfa22ab34982fbb76f08587f43a0f966129ab03ec78113f3a756e1c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722901168318903940,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe9d35cc8086dee57d5df88c8a99e7d8
,},Annotations:map[string]string{io.kubernetes.container.hash: f19e9654,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f227e8cf03b66f737d02e2c7b817576ad72901aa61a0e63d337fb36ec9c32943,PodSandboxId:08f166f93ecd6c382e72917c7c4f41f7606ffe9bf055ba92daa336772820b451,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722901168373329147,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efe0bdf356940b826a4ba3b020e6529c,},Annotations:
map[string]string{io.kubernetes.container.hash: d82bc00b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cc7242052f30bef2f21e600e245b76900de63c25a681c55c467489b4bb4cad9,PodSandboxId:1cfcc3af05ebb8e14183a8bcbdba732a57b181af412a613bd2bbd2579cebbef4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722901168327637273,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f1a0b5192f07729588fefe71f91e855,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d3772211d8011c9a6554ddc5569f3920bbe3050b56a031062e0557cf43be0e2,PodSandboxId:206121b53ee872114e8fe65e58499c97a566a2df9439ac8dd4a51eaa92a99fa1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722901168281340148,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20dbf724a8b080d422a73b396072a4c7,},Annotations:map
[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b03765d8-b175-4906-8054-c0916b4d200a name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:48:15 multinode-342677 crio[2886]: time="2024-08-05 23:48:15.056210043Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=66e9a6ad-d41a-4d9e-99ff-2e64c2e6fc72 name=/runtime.v1.RuntimeService/Version
	Aug 05 23:48:15 multinode-342677 crio[2886]: time="2024-08-05 23:48:15.056284833Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=66e9a6ad-d41a-4d9e-99ff-2e64c2e6fc72 name=/runtime.v1.RuntimeService/Version
	Aug 05 23:48:15 multinode-342677 crio[2886]: time="2024-08-05 23:48:15.057602401Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=752c39d5-3487-4212-8294-002fd3d80f60 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:48:15 multinode-342677 crio[2886]: time="2024-08-05 23:48:15.058221191Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722901695058195444,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143053,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=752c39d5-3487-4212-8294-002fd3d80f60 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:48:15 multinode-342677 crio[2886]: time="2024-08-05 23:48:15.059024581Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5a63f903-d9cb-449d-b21e-1ee3b14e8b25 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:48:15 multinode-342677 crio[2886]: time="2024-08-05 23:48:15.059081747Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5a63f903-d9cb-449d-b21e-1ee3b14e8b25 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:48:15 multinode-342677 crio[2886]: time="2024-08-05 23:48:15.059455302Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a0e5a6b7ec4cef49b6d8fde0b12859c53f333ffac6eb59a728ac65e9274ba3bf,PodSandboxId:9c058b71634ff24060e0e8b0c1b24a92c2863e78046e35171cf27bb43980ef81,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722901625795332746,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-78mt7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2761ea7e-d8a2-40d3-bd8d-a2e484b0bec3,},Annotations:map[string]string{io.kubernetes.container.hash: 5d980674,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62f62a176c70a254e48450706f9b7524e717202076210725309a1e6c28138bc,PodSandboxId:ee8ad4fa44950fa02aa006da747e47186dc3d9aa497736cc81b26273582092da,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1722901592221740056,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6c596,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8a66d1c-c60f-4a75-8104-151faf7922b9,},Annotations:map[string]string{io.kubernetes.container.hash: 4ff057c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6797a9b46983671da1ba766fb34f5be198a6f9d393ac2f171339c0def77c28e1,PodSandboxId:50b50402089f5ef893f2e1020443ac3740204eafd51c0bb3b0c1a95387d4a4f4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722901592169187259,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v42dl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f82457c8-44fc-476d-828b-ac33899c132b,},Annotations:map[string]string{io.kubernetes.container.hash: 66c772fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b79ca8015a145db755007359177d373f8fb63ee8d261e67f64838e7af497133,PodSandboxId:f794a28dee848aa9a6e5f529ff4b0ac13bcdc20f7efc577d10f83af4d0a7f96e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722901591997430400,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2dnzb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddda1087-36af-4e82-88d3-54e6348c5e22,},Annotations:map[string]
string{io.kubernetes.container.hash: 10979fcd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5a92de8354098f5dc94e8aa94bb5d5aad51d11e8a6025988fd20a80568eee49,PodSandboxId:b37ed3d64fd18e7e93f99b002137d4d76775a81143bf44397c5edabd2b86fcc0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722901591963793102,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71064ea8-4354-4f74-9efc-52487675def4,},Annotations:map[string]string{io.ku
bernetes.container.hash: f7b2680,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:943c42b387fc724e757ea4b76361e6a758b577b524c7c10390b65369cea51422,PodSandboxId:054f8fceeb7cdcfd41e776e32bd55b59140891b42d00369b60b9aaad1a58465c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722901588232266700,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f1a0b5192f07729588fefe71f91e855,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:268fef5c96aefcf57fc17aa09c4ebf2c737c37b5bdc83fe67a396bfa1b804384,PodSandboxId:365c7bc7a2b1ca350880d983794b4c6feb8597b97f66dfc3b8a048ac9720136c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722901588210215125,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe9d35cc8086dee57d5df88c8a99e7d8,},Annotations:map[string]string{io.kubernetes.container.hash: f19e9654,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45dfe807a437e2efa653406b8d23df5726dff92deecdb42360742ab37c64c201,PodSandboxId:ac8cb2ec338da417c69fee22ef95616b800021fb34ffc5c70712e3fcbf35a0d0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722901588202172871,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20dbf724a8b080d422a73b396072a4c7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdaf3015949ce621acf67c07735918381578c9af19ebd3e5221f87a4cd2af079,PodSandboxId:3b873b27fc9d25b6c540d99c09926b8080f05d4f2343f9b81ce0b3b945380ea9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722901588169836346,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efe0bdf356940b826a4ba3b020e6529c,},Annotations:map[string]string{io.kubernetes.container.hash: d82bc00b,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ca4e362daa3cf637429cc280868001a87ead2a1c6b86c42ca8880864eb2b33b,PodSandboxId:cbe52e52d544551a4eaeb48f07902ba0252b2e562e0b7426cc66a20762a4a053,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722901260953256689,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-78mt7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2761ea7e-d8a2-40d3-bd8d-a2e484b0bec3,},Annotations:map[string]string{io.kubernetes.container.hash: 5d980674,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00b5601d99857e510793bf69888cd3fe706ae1919ea2fa58cd1e49e2cc8fe8f3,PodSandboxId:3d9e1ffaf8822a12d115003c1883d1817957a0bcb4e4d516649e4b91ab06ba3c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722901206217934742,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v42dl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f82457c8-44fc-476d-828b-ac33899c132b,},Annotations:map[string]string{io.kubernetes.container.hash: 66c772fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8b4c139ba9f34045731ac7ff528df9d7aebaf6e5e682eeac1c47b5710313379,PodSandboxId:a8259144d8379242d353a86d5adec712cc26b0a08e440fabe5668e9603e2a7e1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722901204667548032,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 71064ea8-4354-4f74-9efc-52487675def4,},Annotations:map[string]string{io.kubernetes.container.hash: f7b2680,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:150ce9e294a897bb1eee154f726f0956df1618247219dcd049722c011dbe331e,PodSandboxId:0e5f1c11948a79d3e2c7d6179de4c73e196695f95a60f9892b69f6ec45c16d38,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1722901192992033098,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6c596,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: a8a66d1c-c60f-4a75-8104-151faf7922b9,},Annotations:map[string]string{io.kubernetes.container.hash: 4ff057c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b1d0ef18e29d3787609be51f754e7f2324ee16d19d999762bac401d079a7fd2,PodSandboxId:353f772917eac257829637e206a259eb1f44afeea71efacfab5ce5d5af8892b0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722901189044346374,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2dnzb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: ddda1087-36af-4e82-88d3-54e6348c5e22,},Annotations:map[string]string{io.kubernetes.container.hash: 10979fcd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e13b94e51e4836e65d865d70745d086a906658385b8b067fe0d8e69095705e,PodSandboxId:18c7ba91ecfa22ab34982fbb76f08587f43a0f966129ab03ec78113f3a756e1c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722901168318903940,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe9d35cc8086dee57d5df88c8a99e7d8
,},Annotations:map[string]string{io.kubernetes.container.hash: f19e9654,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f227e8cf03b66f737d02e2c7b817576ad72901aa61a0e63d337fb36ec9c32943,PodSandboxId:08f166f93ecd6c382e72917c7c4f41f7606ffe9bf055ba92daa336772820b451,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722901168373329147,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efe0bdf356940b826a4ba3b020e6529c,},Annotations:
map[string]string{io.kubernetes.container.hash: d82bc00b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cc7242052f30bef2f21e600e245b76900de63c25a681c55c467489b4bb4cad9,PodSandboxId:1cfcc3af05ebb8e14183a8bcbdba732a57b181af412a613bd2bbd2579cebbef4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722901168327637273,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f1a0b5192f07729588fefe71f91e855,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d3772211d8011c9a6554ddc5569f3920bbe3050b56a031062e0557cf43be0e2,PodSandboxId:206121b53ee872114e8fe65e58499c97a566a2df9439ac8dd4a51eaa92a99fa1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722901168281340148,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20dbf724a8b080d422a73b396072a4c7,},Annotations:map
[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5a63f903-d9cb-449d-b21e-1ee3b14e8b25 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:48:15 multinode-342677 crio[2886]: time="2024-08-05 23:48:15.106771855Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=80779bbd-1afe-4fca-9ef5-bb6f2fc9f5f3 name=/runtime.v1.RuntimeService/Version
	Aug 05 23:48:15 multinode-342677 crio[2886]: time="2024-08-05 23:48:15.106880558Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=80779bbd-1afe-4fca-9ef5-bb6f2fc9f5f3 name=/runtime.v1.RuntimeService/Version
	Aug 05 23:48:15 multinode-342677 crio[2886]: time="2024-08-05 23:48:15.108199812Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7c3df570-f9fe-4653-a761-5bdc6d96178e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:48:15 multinode-342677 crio[2886]: time="2024-08-05 23:48:15.108612597Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722901695108590331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143053,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7c3df570-f9fe-4653-a761-5bdc6d96178e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:48:15 multinode-342677 crio[2886]: time="2024-08-05 23:48:15.109373153Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3d026b87-e793-4d5b-a9ad-c5aad2979df7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:48:15 multinode-342677 crio[2886]: time="2024-08-05 23:48:15.109433599Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3d026b87-e793-4d5b-a9ad-c5aad2979df7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:48:15 multinode-342677 crio[2886]: time="2024-08-05 23:48:15.109842474Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a0e5a6b7ec4cef49b6d8fde0b12859c53f333ffac6eb59a728ac65e9274ba3bf,PodSandboxId:9c058b71634ff24060e0e8b0c1b24a92c2863e78046e35171cf27bb43980ef81,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722901625795332746,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-78mt7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2761ea7e-d8a2-40d3-bd8d-a2e484b0bec3,},Annotations:map[string]string{io.kubernetes.container.hash: 5d980674,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62f62a176c70a254e48450706f9b7524e717202076210725309a1e6c28138bc,PodSandboxId:ee8ad4fa44950fa02aa006da747e47186dc3d9aa497736cc81b26273582092da,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1722901592221740056,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6c596,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8a66d1c-c60f-4a75-8104-151faf7922b9,},Annotations:map[string]string{io.kubernetes.container.hash: 4ff057c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6797a9b46983671da1ba766fb34f5be198a6f9d393ac2f171339c0def77c28e1,PodSandboxId:50b50402089f5ef893f2e1020443ac3740204eafd51c0bb3b0c1a95387d4a4f4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722901592169187259,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v42dl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f82457c8-44fc-476d-828b-ac33899c132b,},Annotations:map[string]string{io.kubernetes.container.hash: 66c772fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b79ca8015a145db755007359177d373f8fb63ee8d261e67f64838e7af497133,PodSandboxId:f794a28dee848aa9a6e5f529ff4b0ac13bcdc20f7efc577d10f83af4d0a7f96e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722901591997430400,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2dnzb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddda1087-36af-4e82-88d3-54e6348c5e22,},Annotations:map[string]
string{io.kubernetes.container.hash: 10979fcd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5a92de8354098f5dc94e8aa94bb5d5aad51d11e8a6025988fd20a80568eee49,PodSandboxId:b37ed3d64fd18e7e93f99b002137d4d76775a81143bf44397c5edabd2b86fcc0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722901591963793102,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71064ea8-4354-4f74-9efc-52487675def4,},Annotations:map[string]string{io.ku
bernetes.container.hash: f7b2680,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:943c42b387fc724e757ea4b76361e6a758b577b524c7c10390b65369cea51422,PodSandboxId:054f8fceeb7cdcfd41e776e32bd55b59140891b42d00369b60b9aaad1a58465c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722901588232266700,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f1a0b5192f07729588fefe71f91e855,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:268fef5c96aefcf57fc17aa09c4ebf2c737c37b5bdc83fe67a396bfa1b804384,PodSandboxId:365c7bc7a2b1ca350880d983794b4c6feb8597b97f66dfc3b8a048ac9720136c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722901588210215125,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe9d35cc8086dee57d5df88c8a99e7d8,},Annotations:map[string]string{io.kubernetes.container.hash: f19e9654,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45dfe807a437e2efa653406b8d23df5726dff92deecdb42360742ab37c64c201,PodSandboxId:ac8cb2ec338da417c69fee22ef95616b800021fb34ffc5c70712e3fcbf35a0d0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722901588202172871,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20dbf724a8b080d422a73b396072a4c7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdaf3015949ce621acf67c07735918381578c9af19ebd3e5221f87a4cd2af079,PodSandboxId:3b873b27fc9d25b6c540d99c09926b8080f05d4f2343f9b81ce0b3b945380ea9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722901588169836346,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efe0bdf356940b826a4ba3b020e6529c,},Annotations:map[string]string{io.kubernetes.container.hash: d82bc00b,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ca4e362daa3cf637429cc280868001a87ead2a1c6b86c42ca8880864eb2b33b,PodSandboxId:cbe52e52d544551a4eaeb48f07902ba0252b2e562e0b7426cc66a20762a4a053,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722901260953256689,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-78mt7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2761ea7e-d8a2-40d3-bd8d-a2e484b0bec3,},Annotations:map[string]string{io.kubernetes.container.hash: 5d980674,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00b5601d99857e510793bf69888cd3fe706ae1919ea2fa58cd1e49e2cc8fe8f3,PodSandboxId:3d9e1ffaf8822a12d115003c1883d1817957a0bcb4e4d516649e4b91ab06ba3c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722901206217934742,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v42dl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f82457c8-44fc-476d-828b-ac33899c132b,},Annotations:map[string]string{io.kubernetes.container.hash: 66c772fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8b4c139ba9f34045731ac7ff528df9d7aebaf6e5e682eeac1c47b5710313379,PodSandboxId:a8259144d8379242d353a86d5adec712cc26b0a08e440fabe5668e9603e2a7e1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722901204667548032,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 71064ea8-4354-4f74-9efc-52487675def4,},Annotations:map[string]string{io.kubernetes.container.hash: f7b2680,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:150ce9e294a897bb1eee154f726f0956df1618247219dcd049722c011dbe331e,PodSandboxId:0e5f1c11948a79d3e2c7d6179de4c73e196695f95a60f9892b69f6ec45c16d38,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1722901192992033098,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6c596,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: a8a66d1c-c60f-4a75-8104-151faf7922b9,},Annotations:map[string]string{io.kubernetes.container.hash: 4ff057c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b1d0ef18e29d3787609be51f754e7f2324ee16d19d999762bac401d079a7fd2,PodSandboxId:353f772917eac257829637e206a259eb1f44afeea71efacfab5ce5d5af8892b0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722901189044346374,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2dnzb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: ddda1087-36af-4e82-88d3-54e6348c5e22,},Annotations:map[string]string{io.kubernetes.container.hash: 10979fcd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e13b94e51e4836e65d865d70745d086a906658385b8b067fe0d8e69095705e,PodSandboxId:18c7ba91ecfa22ab34982fbb76f08587f43a0f966129ab03ec78113f3a756e1c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722901168318903940,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe9d35cc8086dee57d5df88c8a99e7d8
,},Annotations:map[string]string{io.kubernetes.container.hash: f19e9654,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f227e8cf03b66f737d02e2c7b817576ad72901aa61a0e63d337fb36ec9c32943,PodSandboxId:08f166f93ecd6c382e72917c7c4f41f7606ffe9bf055ba92daa336772820b451,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722901168373329147,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efe0bdf356940b826a4ba3b020e6529c,},Annotations:
map[string]string{io.kubernetes.container.hash: d82bc00b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cc7242052f30bef2f21e600e245b76900de63c25a681c55c467489b4bb4cad9,PodSandboxId:1cfcc3af05ebb8e14183a8bcbdba732a57b181af412a613bd2bbd2579cebbef4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722901168327637273,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f1a0b5192f07729588fefe71f91e855,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d3772211d8011c9a6554ddc5569f3920bbe3050b56a031062e0557cf43be0e2,PodSandboxId:206121b53ee872114e8fe65e58499c97a566a2df9439ac8dd4a51eaa92a99fa1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722901168281340148,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20dbf724a8b080d422a73b396072a4c7,},Annotations:map
[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3d026b87-e793-4d5b-a9ad-c5aad2979df7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:48:15 multinode-342677 crio[2886]: time="2024-08-05 23:48:15.153412713Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=89cd63f1-7551-40e3-917e-2a2f87a13e85 name=/runtime.v1.RuntimeService/Version
	Aug 05 23:48:15 multinode-342677 crio[2886]: time="2024-08-05 23:48:15.153488474Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=89cd63f1-7551-40e3-917e-2a2f87a13e85 name=/runtime.v1.RuntimeService/Version
	Aug 05 23:48:15 multinode-342677 crio[2886]: time="2024-08-05 23:48:15.155297893Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=071e392f-b8c8-4b65-86f7-0199ba9fcaea name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:48:15 multinode-342677 crio[2886]: time="2024-08-05 23:48:15.155928420Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722901695155902951,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143053,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=071e392f-b8c8-4b65-86f7-0199ba9fcaea name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:48:15 multinode-342677 crio[2886]: time="2024-08-05 23:48:15.156660234Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=98597432-6c42-48cc-9ed0-cef2cb45b899 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:48:15 multinode-342677 crio[2886]: time="2024-08-05 23:48:15.156756607Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=98597432-6c42-48cc-9ed0-cef2cb45b899 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:48:15 multinode-342677 crio[2886]: time="2024-08-05 23:48:15.157223632Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a0e5a6b7ec4cef49b6d8fde0b12859c53f333ffac6eb59a728ac65e9274ba3bf,PodSandboxId:9c058b71634ff24060e0e8b0c1b24a92c2863e78046e35171cf27bb43980ef81,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722901625795332746,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-78mt7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2761ea7e-d8a2-40d3-bd8d-a2e484b0bec3,},Annotations:map[string]string{io.kubernetes.container.hash: 5d980674,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62f62a176c70a254e48450706f9b7524e717202076210725309a1e6c28138bc,PodSandboxId:ee8ad4fa44950fa02aa006da747e47186dc3d9aa497736cc81b26273582092da,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1722901592221740056,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6c596,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8a66d1c-c60f-4a75-8104-151faf7922b9,},Annotations:map[string]string{io.kubernetes.container.hash: 4ff057c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6797a9b46983671da1ba766fb34f5be198a6f9d393ac2f171339c0def77c28e1,PodSandboxId:50b50402089f5ef893f2e1020443ac3740204eafd51c0bb3b0c1a95387d4a4f4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722901592169187259,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v42dl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f82457c8-44fc-476d-828b-ac33899c132b,},Annotations:map[string]string{io.kubernetes.container.hash: 66c772fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b79ca8015a145db755007359177d373f8fb63ee8d261e67f64838e7af497133,PodSandboxId:f794a28dee848aa9a6e5f529ff4b0ac13bcdc20f7efc577d10f83af4d0a7f96e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722901591997430400,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2dnzb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddda1087-36af-4e82-88d3-54e6348c5e22,},Annotations:map[string]
string{io.kubernetes.container.hash: 10979fcd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5a92de8354098f5dc94e8aa94bb5d5aad51d11e8a6025988fd20a80568eee49,PodSandboxId:b37ed3d64fd18e7e93f99b002137d4d76775a81143bf44397c5edabd2b86fcc0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722901591963793102,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71064ea8-4354-4f74-9efc-52487675def4,},Annotations:map[string]string{io.ku
bernetes.container.hash: f7b2680,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:943c42b387fc724e757ea4b76361e6a758b577b524c7c10390b65369cea51422,PodSandboxId:054f8fceeb7cdcfd41e776e32bd55b59140891b42d00369b60b9aaad1a58465c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722901588232266700,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f1a0b5192f07729588fefe71f91e855,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:268fef5c96aefcf57fc17aa09c4ebf2c737c37b5bdc83fe67a396bfa1b804384,PodSandboxId:365c7bc7a2b1ca350880d983794b4c6feb8597b97f66dfc3b8a048ac9720136c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722901588210215125,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe9d35cc8086dee57d5df88c8a99e7d8,},Annotations:map[string]string{io.kubernetes.container.hash: f19e9654,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45dfe807a437e2efa653406b8d23df5726dff92deecdb42360742ab37c64c201,PodSandboxId:ac8cb2ec338da417c69fee22ef95616b800021fb34ffc5c70712e3fcbf35a0d0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722901588202172871,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20dbf724a8b080d422a73b396072a4c7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdaf3015949ce621acf67c07735918381578c9af19ebd3e5221f87a4cd2af079,PodSandboxId:3b873b27fc9d25b6c540d99c09926b8080f05d4f2343f9b81ce0b3b945380ea9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722901588169836346,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efe0bdf356940b826a4ba3b020e6529c,},Annotations:map[string]string{io.kubernetes.container.hash: d82bc00b,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ca4e362daa3cf637429cc280868001a87ead2a1c6b86c42ca8880864eb2b33b,PodSandboxId:cbe52e52d544551a4eaeb48f07902ba0252b2e562e0b7426cc66a20762a4a053,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722901260953256689,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-78mt7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2761ea7e-d8a2-40d3-bd8d-a2e484b0bec3,},Annotations:map[string]string{io.kubernetes.container.hash: 5d980674,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00b5601d99857e510793bf69888cd3fe706ae1919ea2fa58cd1e49e2cc8fe8f3,PodSandboxId:3d9e1ffaf8822a12d115003c1883d1817957a0bcb4e4d516649e4b91ab06ba3c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722901206217934742,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v42dl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f82457c8-44fc-476d-828b-ac33899c132b,},Annotations:map[string]string{io.kubernetes.container.hash: 66c772fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8b4c139ba9f34045731ac7ff528df9d7aebaf6e5e682eeac1c47b5710313379,PodSandboxId:a8259144d8379242d353a86d5adec712cc26b0a08e440fabe5668e9603e2a7e1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722901204667548032,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 71064ea8-4354-4f74-9efc-52487675def4,},Annotations:map[string]string{io.kubernetes.container.hash: f7b2680,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:150ce9e294a897bb1eee154f726f0956df1618247219dcd049722c011dbe331e,PodSandboxId:0e5f1c11948a79d3e2c7d6179de4c73e196695f95a60f9892b69f6ec45c16d38,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1722901192992033098,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6c596,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: a8a66d1c-c60f-4a75-8104-151faf7922b9,},Annotations:map[string]string{io.kubernetes.container.hash: 4ff057c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b1d0ef18e29d3787609be51f754e7f2324ee16d19d999762bac401d079a7fd2,PodSandboxId:353f772917eac257829637e206a259eb1f44afeea71efacfab5ce5d5af8892b0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722901189044346374,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2dnzb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: ddda1087-36af-4e82-88d3-54e6348c5e22,},Annotations:map[string]string{io.kubernetes.container.hash: 10979fcd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e13b94e51e4836e65d865d70745d086a906658385b8b067fe0d8e69095705e,PodSandboxId:18c7ba91ecfa22ab34982fbb76f08587f43a0f966129ab03ec78113f3a756e1c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722901168318903940,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe9d35cc8086dee57d5df88c8a99e7d8
,},Annotations:map[string]string{io.kubernetes.container.hash: f19e9654,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f227e8cf03b66f737d02e2c7b817576ad72901aa61a0e63d337fb36ec9c32943,PodSandboxId:08f166f93ecd6c382e72917c7c4f41f7606ffe9bf055ba92daa336772820b451,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722901168373329147,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efe0bdf356940b826a4ba3b020e6529c,},Annotations:
map[string]string{io.kubernetes.container.hash: d82bc00b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cc7242052f30bef2f21e600e245b76900de63c25a681c55c467489b4bb4cad9,PodSandboxId:1cfcc3af05ebb8e14183a8bcbdba732a57b181af412a613bd2bbd2579cebbef4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722901168327637273,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f1a0b5192f07729588fefe71f91e855,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d3772211d8011c9a6554ddc5569f3920bbe3050b56a031062e0557cf43be0e2,PodSandboxId:206121b53ee872114e8fe65e58499c97a566a2df9439ac8dd4a51eaa92a99fa1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722901168281340148,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20dbf724a8b080d422a73b396072a4c7,},Annotations:map
[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=98597432-6c42-48cc-9ed0-cef2cb45b899 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a0e5a6b7ec4ce       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   9c058b71634ff       busybox-fc5497c4f-78mt7
	b62f62a176c70       917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557                                      About a minute ago   Running             kindnet-cni               1                   ee8ad4fa44950       kindnet-6c596
	6797a9b469836       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   50b50402089f5       coredns-7db6d8ff4d-v42dl
	3b79ca8015a14       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      About a minute ago   Running             kube-proxy                1                   f794a28dee848       kube-proxy-2dnzb
	a5a92de835409       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   b37ed3d64fd18       storage-provisioner
	943c42b387fc7       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      About a minute ago   Running             kube-scheduler            1                   054f8fceeb7cd       kube-scheduler-multinode-342677
	268fef5c96aef       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   365c7bc7a2b1c       etcd-multinode-342677
	45dfe807a437e       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   1                   ac8cb2ec338da       kube-controller-manager-multinode-342677
	bdaf3015949ce       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            1                   3b873b27fc9d2       kube-apiserver-multinode-342677
	4ca4e362daa3c       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   cbe52e52d5445       busybox-fc5497c4f-78mt7
	00b5601d99857       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago        Exited              coredns                   0                   3d9e1ffaf8822       coredns-7db6d8ff4d-v42dl
	c8b4c139ba9f3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   a8259144d8379       storage-provisioner
	150ce9e294a89       docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3    8 minutes ago        Exited              kindnet-cni               0                   0e5f1c11948a7       kindnet-6c596
	3b1d0ef18e29d       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      8 minutes ago        Exited              kube-proxy                0                   353f772917eac       kube-proxy-2dnzb
	f227e8cf03b66       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      8 minutes ago        Exited              kube-apiserver            0                   08f166f93ecd6       kube-apiserver-multinode-342677
	5cc7242052f30       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      8 minutes ago        Exited              kube-scheduler            0                   1cfcc3af05ebb       kube-scheduler-multinode-342677
	30e13b94e51e4       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago        Exited              etcd                      0                   18c7ba91ecfa2       etcd-multinode-342677
	9d3772211d801       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      8 minutes ago        Exited              kube-controller-manager   0                   206121b53ee87       kube-controller-manager-multinode-342677
	
	
	==> coredns [00b5601d99857e510793bf69888cd3fe706ae1919ea2fa58cd1e49e2cc8fe8f3] <==
	[INFO] 10.244.1.2:34450 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001788569s
	[INFO] 10.244.1.2:41932 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154255s
	[INFO] 10.244.1.2:57045 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012768s
	[INFO] 10.244.1.2:41817 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001243242s
	[INFO] 10.244.1.2:41711 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000085796s
	[INFO] 10.244.1.2:41750 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118369s
	[INFO] 10.244.1.2:59875 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096696s
	[INFO] 10.244.0.3:44805 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015438s
	[INFO] 10.244.0.3:34075 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000079664s
	[INFO] 10.244.0.3:44640 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076905s
	[INFO] 10.244.0.3:49003 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080563s
	[INFO] 10.244.1.2:42631 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168788s
	[INFO] 10.244.1.2:53592 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000121393s
	[INFO] 10.244.1.2:37536 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076273s
	[INFO] 10.244.1.2:37579 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102808s
	[INFO] 10.244.0.3:51697 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108269s
	[INFO] 10.244.0.3:51895 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000113953s
	[INFO] 10.244.0.3:48174 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000083572s
	[INFO] 10.244.0.3:36200 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000083405s
	[INFO] 10.244.1.2:56466 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160083s
	[INFO] 10.244.1.2:59177 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000182601s
	[INFO] 10.244.1.2:32771 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000109942s
	[INFO] 10.244.1.2:56161 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000084893s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [6797a9b46983671da1ba766fb34f5be198a6f9d393ac2f171339c0def77c28e1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57926 - 14732 "HINFO IN 843478541876508552.7132016438858336143. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.015236044s
	
	
	==> describe nodes <==
	Name:               multinode-342677
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-342677
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=multinode-342677
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_05T23_39_35_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 23:39:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-342677
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 23:48:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 23:46:31 +0000   Mon, 05 Aug 2024 23:39:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 23:46:31 +0000   Mon, 05 Aug 2024 23:39:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 23:46:31 +0000   Mon, 05 Aug 2024 23:39:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 23:46:31 +0000   Mon, 05 Aug 2024 23:40:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.10
	  Hostname:    multinode-342677
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 57f45a9d11da491e8779a6849117c573
	  System UUID:                57f45a9d-11da-491e-8779-a6849117c573
	  Boot ID:                    a21c39f9-4ec3-4075-86c4-15b50cfc820e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-78mt7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m18s
	  kube-system                 coredns-7db6d8ff4d-v42dl                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m27s
	  kube-system                 etcd-multinode-342677                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m41s
	  kube-system                 kindnet-6c596                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m27s
	  kube-system                 kube-apiserver-multinode-342677             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m41s
	  kube-system                 kube-controller-manager-multinode-342677    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m43s
	  kube-system                 kube-proxy-2dnzb                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 kube-scheduler-multinode-342677             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m41s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 8m26s                kube-proxy       
	  Normal  Starting                 103s                 kube-proxy       
	  Normal  NodeHasSufficientPID     8m42s                kubelet          Node multinode-342677 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m42s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m42s                kubelet          Node multinode-342677 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m42s                kubelet          Node multinode-342677 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 8m42s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m28s                node-controller  Node multinode-342677 event: Registered Node multinode-342677 in Controller
	  Normal  NodeReady                8m11s                kubelet          Node multinode-342677 status is now: NodeReady
	  Normal  Starting                 108s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  108s (x8 over 108s)  kubelet          Node multinode-342677 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    108s (x8 over 108s)  kubelet          Node multinode-342677 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     108s (x7 over 108s)  kubelet          Node multinode-342677 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  108s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           92s                  node-controller  Node multinode-342677 event: Registered Node multinode-342677 in Controller
	
	
	Name:               multinode-342677-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-342677-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=multinode-342677
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_05T23_47_12_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 23:47:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-342677-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 23:48:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 23:47:43 +0000   Mon, 05 Aug 2024 23:47:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 23:47:43 +0000   Mon, 05 Aug 2024 23:47:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 23:47:43 +0000   Mon, 05 Aug 2024 23:47:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 23:47:43 +0000   Mon, 05 Aug 2024 23:47:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.89
	  Hostname:    multinode-342677-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 800cc43d422f47829d391512715fe306
	  System UUID:                800cc43d-422f-4782-9d39-1512715fe306
	  Boot ID:                    ab3dfc9b-1b31-4ead-946b-bdf9e2156dba
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-98dgl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kindnet-kw6xt              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m40s
	  kube-system                 kube-proxy-ktlwn           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m35s                  kube-proxy  
	  Normal  Starting                 58s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m40s (x2 over 7m40s)  kubelet     Node multinode-342677-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m40s (x2 over 7m40s)  kubelet     Node multinode-342677-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m40s (x2 over 7m40s)  kubelet     Node multinode-342677-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m40s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m20s                  kubelet     Node multinode-342677-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  63s (x2 over 63s)      kubelet     Node multinode-342677-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    63s (x2 over 63s)      kubelet     Node multinode-342677-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     63s (x2 over 63s)      kubelet     Node multinode-342677-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  63s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                43s                    kubelet     Node multinode-342677-m02 status is now: NodeReady
	
	
	Name:               multinode-342677-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-342677-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=multinode-342677
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_05T23_47_53_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 23:47:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-342677-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 23:48:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 23:48:12 +0000   Mon, 05 Aug 2024 23:47:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 23:48:12 +0000   Mon, 05 Aug 2024 23:47:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 23:48:12 +0000   Mon, 05 Aug 2024 23:47:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 23:48:12 +0000   Mon, 05 Aug 2024 23:48:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.75
	  Hostname:    multinode-342677-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 784355de4b494f3b87f01a77e6bb51f0
	  System UUID:                784355de-4b49-4f3b-87f0-1a77e6bb51f0
	  Boot ID:                    d58c837e-c515-405e-b8a9-3e1bec8c54a2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-rbtpm       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m45s
	  kube-system                 kube-proxy-rqbsd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m40s                  kube-proxy  
	  Normal  Starting                 18s                    kube-proxy  
	  Normal  Starting                 5m51s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m46s (x2 over 6m46s)  kubelet     Node multinode-342677-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m46s (x2 over 6m46s)  kubelet     Node multinode-342677-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m46s (x2 over 6m46s)  kubelet     Node multinode-342677-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m45s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m25s                  kubelet     Node multinode-342677-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m55s (x2 over 5m55s)  kubelet     Node multinode-342677-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m55s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m55s (x2 over 5m55s)  kubelet     Node multinode-342677-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m55s (x2 over 5m55s)  kubelet     Node multinode-342677-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m36s                  kubelet     Node multinode-342677-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  23s (x2 over 23s)      kubelet     Node multinode-342677-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x2 over 23s)      kubelet     Node multinode-342677-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x2 over 23s)      kubelet     Node multinode-342677-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3s                     kubelet     Node multinode-342677-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.175173] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.109595] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.267886] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +4.326511] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +0.063755] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.754069] systemd-fstab-generator[951]: Ignoring "noauto" option for root device
	[  +0.561200] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.978599] systemd-fstab-generator[1285]: Ignoring "noauto" option for root device
	[  +0.086658] kauditd_printk_skb: 41 callbacks suppressed
	[ +14.181258] systemd-fstab-generator[1475]: Ignoring "noauto" option for root device
	[  +0.134859] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.010819] kauditd_printk_skb: 51 callbacks suppressed
	[Aug 5 23:40] kauditd_printk_skb: 14 callbacks suppressed
	[Aug 5 23:46] systemd-fstab-generator[2805]: Ignoring "noauto" option for root device
	[  +0.150293] systemd-fstab-generator[2817]: Ignoring "noauto" option for root device
	[  +0.195896] systemd-fstab-generator[2831]: Ignoring "noauto" option for root device
	[  +0.159146] systemd-fstab-generator[2843]: Ignoring "noauto" option for root device
	[  +0.289437] systemd-fstab-generator[2871]: Ignoring "noauto" option for root device
	[  +8.138736] systemd-fstab-generator[2971]: Ignoring "noauto" option for root device
	[  +0.082286] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.221062] systemd-fstab-generator[3092]: Ignoring "noauto" option for root device
	[  +4.619997] kauditd_printk_skb: 74 callbacks suppressed
	[ +12.037641] kauditd_printk_skb: 32 callbacks suppressed
	[  +4.116201] systemd-fstab-generator[3924]: Ignoring "noauto" option for root device
	[Aug 5 23:47] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [268fef5c96aefcf57fc17aa09c4ebf2c737c37b5bdc83fe67a396bfa1b804384] <==
	{"level":"info","ts":"2024-08-05T23:46:28.662085Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T23:46:28.662112Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T23:46:28.662406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e switched to configuration voters=(17911497232019635470)"}
	{"level":"info","ts":"2024-08-05T23:46:28.664834Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a710b3f69152e32","local-member-id":"f8926bd555ec3d0e","added-peer-id":"f8926bd555ec3d0e","added-peer-peer-urls":["https://192.168.39.10:2380"]}
	{"level":"info","ts":"2024-08-05T23:46:28.668002Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a710b3f69152e32","local-member-id":"f8926bd555ec3d0e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:46:28.668614Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:46:28.674132Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-05T23:46:28.674339Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f8926bd555ec3d0e","initial-advertise-peer-urls":["https://192.168.39.10:2380"],"listen-peer-urls":["https://192.168.39.10:2380"],"advertise-client-urls":["https://192.168.39.10:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.10:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-05T23:46:28.674389Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-05T23:46:28.674548Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.10:2380"}
	{"level":"info","ts":"2024-08-05T23:46:28.674574Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.10:2380"}
	{"level":"info","ts":"2024-08-05T23:46:30.017249Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-05T23:46:30.017296Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-05T23:46:30.017334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e received MsgPreVoteResp from f8926bd555ec3d0e at term 2"}
	{"level":"info","ts":"2024-08-05T23:46:30.017348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e became candidate at term 3"}
	{"level":"info","ts":"2024-08-05T23:46:30.017357Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e received MsgVoteResp from f8926bd555ec3d0e at term 3"}
	{"level":"info","ts":"2024-08-05T23:46:30.017366Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e became leader at term 3"}
	{"level":"info","ts":"2024-08-05T23:46:30.017375Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f8926bd555ec3d0e elected leader f8926bd555ec3d0e at term 3"}
	{"level":"info","ts":"2024-08-05T23:46:30.023417Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T23:46:30.024808Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f8926bd555ec3d0e","local-member-attributes":"{Name:multinode-342677 ClientURLs:[https://192.168.39.10:2379]}","request-path":"/0/members/f8926bd555ec3d0e/attributes","cluster-id":"3a710b3f69152e32","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-05T23:46:30.025518Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.10:2379"}
	{"level":"info","ts":"2024-08-05T23:46:30.025741Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T23:46:30.026087Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-05T23:46:30.026101Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-05T23:46:30.027565Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [30e13b94e51e4836e65d865d70745d086a906658385b8b067fe0d8e69095705e] <==
	{"level":"info","ts":"2024-08-05T23:39:29.692948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f8926bd555ec3d0e elected leader f8926bd555ec3d0e at term 2"}
	{"level":"info","ts":"2024-08-05T23:39:29.694589Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:39:29.695548Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f8926bd555ec3d0e","local-member-attributes":"{Name:multinode-342677 ClientURLs:[https://192.168.39.10:2379]}","request-path":"/0/members/f8926bd555ec3d0e/attributes","cluster-id":"3a710b3f69152e32","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-05T23:39:29.696159Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a710b3f69152e32","local-member-id":"f8926bd555ec3d0e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:39:29.696261Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:39:29.696301Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:39:29.696329Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T23:39:29.696786Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T23:39:29.697759Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-05T23:39:29.697795Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-05T23:39:29.698571Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.10:2379"}
	{"level":"info","ts":"2024-08-05T23:39:29.7001Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-05T23:40:35.807248Z","caller":"traceutil/trace.go:171","msg":"trace[265089564] transaction","detail":"{read_only:false; response_revision:455; number_of_response:1; }","duration":"154.024841ms","start":"2024-08-05T23:40:35.653191Z","end":"2024-08-05T23:40:35.807216Z","steps":["trace[265089564] 'process raft request'  (duration: 153.958786ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T23:41:30.019621Z","caller":"traceutil/trace.go:171","msg":"trace[494470632] transaction","detail":"{read_only:false; response_revision:590; number_of_response:1; }","duration":"168.29581ms","start":"2024-08-05T23:41:29.851256Z","end":"2024-08-05T23:41:30.019552Z","steps":["trace[494470632] 'process raft request'  (duration: 167.217525ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T23:41:30.02015Z","caller":"traceutil/trace.go:171","msg":"trace[1642343311] transaction","detail":"{read_only:false; response_revision:591; number_of_response:1; }","duration":"146.040922ms","start":"2024-08-05T23:41:29.874099Z","end":"2024-08-05T23:41:30.02014Z","steps":["trace[1642343311] 'process raft request'  (duration: 145.936898ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T23:44:44.800342Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-05T23:44:44.800456Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-342677","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.10:2380"],"advertise-client-urls":["https://192.168.39.10:2379"]}
	{"level":"warn","ts":"2024-08-05T23:44:44.80061Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-05T23:44:44.800752Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-05T23:44:44.88377Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.10:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-05T23:44:44.883807Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.10:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-05T23:44:44.88386Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f8926bd555ec3d0e","current-leader-member-id":"f8926bd555ec3d0e"}
	{"level":"info","ts":"2024-08-05T23:44:44.886738Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.10:2380"}
	{"level":"info","ts":"2024-08-05T23:44:44.886878Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.10:2380"}
	{"level":"info","ts":"2024-08-05T23:44:44.886887Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-342677","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.10:2380"],"advertise-client-urls":["https://192.168.39.10:2379"]}
	
	
	==> kernel <==
	 23:48:15 up 9 min,  0 users,  load average: 0.41, 0.28, 0.15
	Linux multinode-342677 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [150ce9e294a897bb1eee154f726f0956df1618247219dcd049722c011dbe331e] <==
	I0805 23:44:04.061505       1 main.go:322] Node multinode-342677-m02 has CIDR [10.244.1.0/24] 
	I0805 23:44:14.061009       1 main.go:295] Handling node with IPs: map[192.168.39.10:{}]
	I0805 23:44:14.061050       1 main.go:299] handling current node
	I0805 23:44:14.061069       1 main.go:295] Handling node with IPs: map[192.168.39.89:{}]
	I0805 23:44:14.061077       1 main.go:322] Node multinode-342677-m02 has CIDR [10.244.1.0/24] 
	I0805 23:44:14.061213       1 main.go:295] Handling node with IPs: map[192.168.39.75:{}]
	I0805 23:44:14.061219       1 main.go:322] Node multinode-342677-m03 has CIDR [10.244.3.0/24] 
	I0805 23:44:24.056383       1 main.go:295] Handling node with IPs: map[192.168.39.10:{}]
	I0805 23:44:24.056413       1 main.go:299] handling current node
	I0805 23:44:24.056426       1 main.go:295] Handling node with IPs: map[192.168.39.89:{}]
	I0805 23:44:24.056431       1 main.go:322] Node multinode-342677-m02 has CIDR [10.244.1.0/24] 
	I0805 23:44:24.056628       1 main.go:295] Handling node with IPs: map[192.168.39.75:{}]
	I0805 23:44:24.056634       1 main.go:322] Node multinode-342677-m03 has CIDR [10.244.3.0/24] 
	I0805 23:44:34.059766       1 main.go:295] Handling node with IPs: map[192.168.39.10:{}]
	I0805 23:44:34.059819       1 main.go:299] handling current node
	I0805 23:44:34.059846       1 main.go:295] Handling node with IPs: map[192.168.39.89:{}]
	I0805 23:44:34.059851       1 main.go:322] Node multinode-342677-m02 has CIDR [10.244.1.0/24] 
	I0805 23:44:34.060045       1 main.go:295] Handling node with IPs: map[192.168.39.75:{}]
	I0805 23:44:34.060055       1 main.go:322] Node multinode-342677-m03 has CIDR [10.244.3.0/24] 
	I0805 23:44:44.063998       1 main.go:295] Handling node with IPs: map[192.168.39.10:{}]
	I0805 23:44:44.064070       1 main.go:299] handling current node
	I0805 23:44:44.064095       1 main.go:295] Handling node with IPs: map[192.168.39.89:{}]
	I0805 23:44:44.064105       1 main.go:322] Node multinode-342677-m02 has CIDR [10.244.1.0/24] 
	I0805 23:44:44.064308       1 main.go:295] Handling node with IPs: map[192.168.39.75:{}]
	I0805 23:44:44.064347       1 main.go:322] Node multinode-342677-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [b62f62a176c70a254e48450706f9b7524e717202076210725309a1e6c28138bc] <==
	I0805 23:47:33.259384       1 main.go:299] handling current node
	I0805 23:47:43.266823       1 main.go:295] Handling node with IPs: map[192.168.39.75:{}]
	I0805 23:47:43.266884       1 main.go:322] Node multinode-342677-m03 has CIDR [10.244.3.0/24] 
	I0805 23:47:43.267071       1 main.go:295] Handling node with IPs: map[192.168.39.10:{}]
	I0805 23:47:43.267105       1 main.go:299] handling current node
	I0805 23:47:43.267134       1 main.go:295] Handling node with IPs: map[192.168.39.89:{}]
	I0805 23:47:43.267142       1 main.go:322] Node multinode-342677-m02 has CIDR [10.244.1.0/24] 
	I0805 23:47:53.258971       1 main.go:295] Handling node with IPs: map[192.168.39.10:{}]
	I0805 23:47:53.259019       1 main.go:299] handling current node
	I0805 23:47:53.259053       1 main.go:295] Handling node with IPs: map[192.168.39.89:{}]
	I0805 23:47:53.259061       1 main.go:322] Node multinode-342677-m02 has CIDR [10.244.1.0/24] 
	I0805 23:47:53.259591       1 main.go:295] Handling node with IPs: map[192.168.39.75:{}]
	I0805 23:47:53.259608       1 main.go:322] Node multinode-342677-m03 has CIDR [10.244.2.0/24] 
	I0805 23:48:03.259101       1 main.go:295] Handling node with IPs: map[192.168.39.10:{}]
	I0805 23:48:03.259235       1 main.go:299] handling current node
	I0805 23:48:03.259269       1 main.go:295] Handling node with IPs: map[192.168.39.89:{}]
	I0805 23:48:03.259275       1 main.go:322] Node multinode-342677-m02 has CIDR [10.244.1.0/24] 
	I0805 23:48:03.259442       1 main.go:295] Handling node with IPs: map[192.168.39.75:{}]
	I0805 23:48:03.259468       1 main.go:322] Node multinode-342677-m03 has CIDR [10.244.2.0/24] 
	I0805 23:48:13.258556       1 main.go:295] Handling node with IPs: map[192.168.39.10:{}]
	I0805 23:48:13.258652       1 main.go:299] handling current node
	I0805 23:48:13.258732       1 main.go:295] Handling node with IPs: map[192.168.39.89:{}]
	I0805 23:48:13.258758       1 main.go:322] Node multinode-342677-m02 has CIDR [10.244.1.0/24] 
	I0805 23:48:13.258931       1 main.go:295] Handling node with IPs: map[192.168.39.75:{}]
	I0805 23:48:13.258971       1 main.go:322] Node multinode-342677-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [bdaf3015949ce621acf67c07735918381578c9af19ebd3e5221f87a4cd2af079] <==
	I0805 23:46:31.370626       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0805 23:46:31.377518       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0805 23:46:31.377911       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0805 23:46:31.382906       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0805 23:46:31.390519       1 aggregator.go:165] initial CRD sync complete...
	I0805 23:46:31.390616       1 autoregister_controller.go:141] Starting autoregister controller
	I0805 23:46:31.390655       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0805 23:46:31.390729       1 cache.go:39] Caches are synced for autoregister controller
	I0805 23:46:31.393736       1 shared_informer.go:320] Caches are synced for configmaps
	I0805 23:46:31.393867       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0805 23:46:31.393892       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0805 23:46:31.404845       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0805 23:46:31.428128       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0805 23:46:31.442972       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0805 23:46:31.454267       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0805 23:46:31.454382       1 policy_source.go:224] refreshing policies
	I0805 23:46:31.469376       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0805 23:46:32.279308       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0805 23:46:33.429581       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0805 23:46:33.553178       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0805 23:46:33.568025       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0805 23:46:33.646975       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0805 23:46:33.653566       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0805 23:46:43.869066       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0805 23:46:44.022724       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [f227e8cf03b66f737d02e2c7b817576ad72901aa61a0e63d337fb36ec9c32943] <==
	E0805 23:44:44.825501       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0805 23:44:44.825570       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0805 23:44:44.825655       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0805 23:44:44.825789       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0805 23:44:44.825798       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0805 23:44:44.825841       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:44:44.825877       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0805 23:44:44.825877       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0805 23:44:44.825909       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:44:44.825944       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:44:44.826339       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:44:44.826425       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:44:44.826540       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:44:44.826576       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:44:44.826610       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:44:44.826641       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:44:44.826739       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:44:44.826776       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:44:44.826808       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:44:44.826866       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:44:44.826899       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:44:44.826928       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:44:44.826958       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:44:44.826988       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:44:44.827027       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [45dfe807a437e2efa653406b8d23df5726dff92deecdb42360742ab37c64c201] <==
	I0805 23:46:44.346552       1 shared_informer.go:320] Caches are synced for garbage collector
	I0805 23:46:44.404021       1 shared_informer.go:320] Caches are synced for garbage collector
	I0805 23:46:44.404068       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0805 23:47:08.290242       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.915085ms"
	I0805 23:47:08.295750       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.36742ms"
	I0805 23:47:08.296096       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="79.764µs"
	I0805 23:47:12.516277       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-342677-m02\" does not exist"
	I0805 23:47:12.525088       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-342677-m02" podCIDRs=["10.244.1.0/24"]
	I0805 23:47:14.287153       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.315µs"
	I0805 23:47:14.429188       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.535µs"
	I0805 23:47:14.442218       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.046µs"
	I0805 23:47:14.452449       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.912µs"
	I0805 23:47:14.498292       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.556µs"
	I0805 23:47:14.505810       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.066µs"
	I0805 23:47:14.507821       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.872µs"
	I0805 23:47:32.310853       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-342677-m02"
	I0805 23:47:32.332018       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.231µs"
	I0805 23:47:32.347365       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.798µs"
	I0805 23:47:35.696457       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.696021ms"
	I0805 23:47:35.696612       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.613µs"
	I0805 23:47:51.403049       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-342677-m02"
	I0805 23:47:52.584099       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-342677-m02"
	I0805 23:47:52.584326       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-342677-m03\" does not exist"
	I0805 23:47:52.612588       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-342677-m03" podCIDRs=["10.244.2.0/24"]
	I0805 23:48:12.229963       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-342677-m02"
	
	
	==> kube-controller-manager [9d3772211d8011c9a6554ddc5569f3920bbe3050b56a031062e0557cf43be0e2] <==
	I0805 23:40:35.812173       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-342677-m02\" does not exist"
	I0805 23:40:35.824751       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-342677-m02" podCIDRs=["10.244.1.0/24"]
	I0805 23:40:37.426743       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-342677-m02"
	I0805 23:40:55.664315       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-342677-m02"
	I0805 23:40:57.947331       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.865169ms"
	I0805 23:40:57.958606       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.839024ms"
	I0805 23:40:57.959046       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.101µs"
	I0805 23:40:57.959269       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.026µs"
	I0805 23:41:01.262803       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.33961ms"
	I0805 23:41:01.263863       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.513µs"
	I0805 23:41:01.856846       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.622747ms"
	I0805 23:41:01.858076       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.259µs"
	I0805 23:41:30.022443       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-342677-m03\" does not exist"
	I0805 23:41:30.022595       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-342677-m02"
	I0805 23:41:30.077932       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-342677-m03" podCIDRs=["10.244.2.0/24"]
	I0805 23:41:32.450638       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-342677-m03"
	I0805 23:41:50.857843       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-342677-m02"
	I0805 23:42:19.291048       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-342677-m02"
	I0805 23:42:20.251487       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-342677-m03\" does not exist"
	I0805 23:42:20.251540       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-342677-m02"
	I0805 23:42:20.260213       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-342677-m03" podCIDRs=["10.244.3.0/24"]
	I0805 23:42:39.087270       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-342677-m03"
	I0805 23:43:22.507117       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-342677-m02"
	I0805 23:43:22.571737       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.810912ms"
	I0805 23:43:22.571981       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.744µs"
	
	
	==> kube-proxy [3b1d0ef18e29d3787609be51f754e7f2324ee16d19d999762bac401d079a7fd2] <==
	I0805 23:39:49.263232       1 server_linux.go:69] "Using iptables proxy"
	I0805 23:39:49.288380       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.10"]
	I0805 23:39:49.342033       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0805 23:39:49.342067       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 23:39:49.342083       1 server_linux.go:165] "Using iptables Proxier"
	I0805 23:39:49.345380       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 23:39:49.346085       1 server.go:872] "Version info" version="v1.30.3"
	I0805 23:39:49.346132       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 23:39:49.349428       1 config.go:192] "Starting service config controller"
	I0805 23:39:49.349983       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 23:39:49.350044       1 config.go:101] "Starting endpoint slice config controller"
	I0805 23:39:49.350062       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 23:39:49.350895       1 config.go:319] "Starting node config controller"
	I0805 23:39:49.350933       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 23:39:49.450806       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0805 23:39:49.450833       1 shared_informer.go:320] Caches are synced for service config
	I0805 23:39:49.451453       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [3b79ca8015a145db755007359177d373f8fb63ee8d261e67f64838e7af497133] <==
	I0805 23:46:32.266088       1 server_linux.go:69] "Using iptables proxy"
	I0805 23:46:32.300138       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.10"]
	I0805 23:46:32.408626       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0805 23:46:32.409281       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 23:46:32.409364       1 server_linux.go:165] "Using iptables Proxier"
	I0805 23:46:32.414603       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 23:46:32.414874       1 server.go:872] "Version info" version="v1.30.3"
	I0805 23:46:32.414903       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 23:46:32.417649       1 config.go:192] "Starting service config controller"
	I0805 23:46:32.417744       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 23:46:32.419444       1 config.go:101] "Starting endpoint slice config controller"
	I0805 23:46:32.419468       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 23:46:32.420533       1 config.go:319] "Starting node config controller"
	I0805 23:46:32.420560       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 23:46:32.520015       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0805 23:46:32.520101       1 shared_informer.go:320] Caches are synced for service config
	I0805 23:46:32.520820       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5cc7242052f30bef2f21e600e245b76900de63c25a681c55c467489b4bb4cad9] <==
	E0805 23:39:32.056172       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0805 23:39:32.133509       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0805 23:39:32.133770       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0805 23:39:32.170758       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0805 23:39:32.170855       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0805 23:39:32.193853       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0805 23:39:32.193884       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0805 23:39:32.259144       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0805 23:39:32.259248       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0805 23:39:32.275972       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0805 23:39:32.276105       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0805 23:39:32.303521       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0805 23:39:32.303618       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0805 23:39:32.316171       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0805 23:39:32.316214       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0805 23:39:32.362285       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0805 23:39:32.362333       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0805 23:39:32.363895       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0805 23:39:32.363967       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0805 23:39:32.395077       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0805 23:39:32.395164       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0805 23:39:32.554465       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0805 23:39:32.554867       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0805 23:39:35.184013       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0805 23:44:44.816277       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [943c42b387fc724e757ea4b76361e6a758b577b524c7c10390b65369cea51422] <==
	I0805 23:46:29.145080       1 serving.go:380] Generated self-signed cert in-memory
	W0805 23:46:31.328860       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0805 23:46:31.329016       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0805 23:46:31.329153       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0805 23:46:31.329351       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0805 23:46:31.406082       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0805 23:46:31.406145       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 23:46:31.410413       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0805 23:46:31.410786       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0805 23:46:31.413769       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0805 23:46:31.413892       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0805 23:46:31.515919       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 05 23:46:28 multinode-342677 kubelet[3099]: E0805 23:46:28.504389    3099 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-342677&limit=500&resourceVersion=0": dial tcp 192.168.39.10:8443: connect: connection refused
	Aug 05 23:46:28 multinode-342677 kubelet[3099]: I0805 23:46:28.988643    3099 kubelet_node_status.go:73] "Attempting to register node" node="multinode-342677"
	Aug 05 23:46:31 multinode-342677 kubelet[3099]: I0805 23:46:31.461296    3099 apiserver.go:52] "Watching apiserver"
	Aug 05 23:46:31 multinode-342677 kubelet[3099]: I0805 23:46:31.464284    3099 topology_manager.go:215] "Topology Admit Handler" podUID="ddda1087-36af-4e82-88d3-54e6348c5e22" podNamespace="kube-system" podName="kube-proxy-2dnzb"
	Aug 05 23:46:31 multinode-342677 kubelet[3099]: I0805 23:46:31.464434    3099 topology_manager.go:215] "Topology Admit Handler" podUID="f82457c8-44fc-476d-828b-ac33899c132b" podNamespace="kube-system" podName="coredns-7db6d8ff4d-v42dl"
	Aug 05 23:46:31 multinode-342677 kubelet[3099]: I0805 23:46:31.464521    3099 topology_manager.go:215] "Topology Admit Handler" podUID="a8a66d1c-c60f-4a75-8104-151faf7922b9" podNamespace="kube-system" podName="kindnet-6c596"
	Aug 05 23:46:31 multinode-342677 kubelet[3099]: I0805 23:46:31.464585    3099 topology_manager.go:215] "Topology Admit Handler" podUID="71064ea8-4354-4f74-9efc-52487675def4" podNamespace="kube-system" podName="storage-provisioner"
	Aug 05 23:46:31 multinode-342677 kubelet[3099]: I0805 23:46:31.464654    3099 topology_manager.go:215] "Topology Admit Handler" podUID="2761ea7e-d8a2-40d3-bd8d-a2e484b0bec3" podNamespace="default" podName="busybox-fc5497c4f-78mt7"
	Aug 05 23:46:31 multinode-342677 kubelet[3099]: I0805 23:46:31.479515    3099 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Aug 05 23:46:31 multinode-342677 kubelet[3099]: I0805 23:46:31.541853    3099 kubelet_node_status.go:112] "Node was previously registered" node="multinode-342677"
	Aug 05 23:46:31 multinode-342677 kubelet[3099]: I0805 23:46:31.541962    3099 kubelet_node_status.go:76] "Successfully registered node" node="multinode-342677"
	Aug 05 23:46:31 multinode-342677 kubelet[3099]: I0805 23:46:31.543580    3099 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 05 23:46:31 multinode-342677 kubelet[3099]: I0805 23:46:31.545002    3099 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 05 23:46:31 multinode-342677 kubelet[3099]: I0805 23:46:31.575983    3099 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a8a66d1c-c60f-4a75-8104-151faf7922b9-cni-cfg\") pod \"kindnet-6c596\" (UID: \"a8a66d1c-c60f-4a75-8104-151faf7922b9\") " pod="kube-system/kindnet-6c596"
	Aug 05 23:46:31 multinode-342677 kubelet[3099]: I0805 23:46:31.576647    3099 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ddda1087-36af-4e82-88d3-54e6348c5e22-lib-modules\") pod \"kube-proxy-2dnzb\" (UID: \"ddda1087-36af-4e82-88d3-54e6348c5e22\") " pod="kube-system/kube-proxy-2dnzb"
	Aug 05 23:46:31 multinode-342677 kubelet[3099]: I0805 23:46:31.577087    3099 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a8a66d1c-c60f-4a75-8104-151faf7922b9-xtables-lock\") pod \"kindnet-6c596\" (UID: \"a8a66d1c-c60f-4a75-8104-151faf7922b9\") " pod="kube-system/kindnet-6c596"
	Aug 05 23:46:31 multinode-342677 kubelet[3099]: I0805 23:46:31.577371    3099 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a8a66d1c-c60f-4a75-8104-151faf7922b9-lib-modules\") pod \"kindnet-6c596\" (UID: \"a8a66d1c-c60f-4a75-8104-151faf7922b9\") " pod="kube-system/kindnet-6c596"
	Aug 05 23:46:31 multinode-342677 kubelet[3099]: I0805 23:46:31.577581    3099 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/71064ea8-4354-4f74-9efc-52487675def4-tmp\") pod \"storage-provisioner\" (UID: \"71064ea8-4354-4f74-9efc-52487675def4\") " pod="kube-system/storage-provisioner"
	Aug 05 23:46:31 multinode-342677 kubelet[3099]: I0805 23:46:31.578286    3099 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ddda1087-36af-4e82-88d3-54e6348c5e22-xtables-lock\") pod \"kube-proxy-2dnzb\" (UID: \"ddda1087-36af-4e82-88d3-54e6348c5e22\") " pod="kube-system/kube-proxy-2dnzb"
	Aug 05 23:46:36 multinode-342677 kubelet[3099]: I0805 23:46:36.161051    3099 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Aug 05 23:47:27 multinode-342677 kubelet[3099]: E0805 23:47:27.535433    3099 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:47:27 multinode-342677 kubelet[3099]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:47:27 multinode-342677 kubelet[3099]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:47:27 multinode-342677 kubelet[3099]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:47:27 multinode-342677 kubelet[3099]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0805 23:48:14.715112   49051 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19373-9606/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
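	(Editor's note, illustration only.) The "failed to output last start logs ... bufio.Scanner: token too long" error captured above is what a Go bufio.Scanner reports when a single line exceeds its default 64 KiB token limit. The sketch below is not minikube's code; the file path and the 10 MiB cap are assumptions, shown only to make the failure mode concrete.

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Hypothetical path standing in for .minikube/logs/lastStart.txt.
		f, err := os.Open("/tmp/lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default max token size is bufio.MaxScanTokenSize (64 KiB); one very
		// long log line exceeds it and Scan stops with "bufio.Scanner: token too long".
		// Raising the cap lets scanning continue (10 MiB is an arbitrary choice).
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			_ = sc.Text() // process each line
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}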
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-342677 -n multinode-342677
E0805 23:48:16.351628   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.crt: no such file or directory
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-342677 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (334.79s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (141.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-342677 stop
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-342677 stop: exit status 82 (2m0.466038636s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-342677-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-342677 stop": exit status 82
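	(Editor's note, illustration only.) Exit status 82 / GUEST_STOP_TIMEOUT above means the VM was still "Running" when the stop deadline expired. The sketch below shows only the general poll-until-deadline pattern; it is not minikube's implementation, and getState and requestStop are hypothetical stand-ins.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// stopVM asks the VM to stop, then polls its state until it leaves
	// "Running" or the timeout expires.
	func stopVM(getState func() string, requestStop func(), timeout time.Duration) error {
		requestStop()
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if getState() != "Running" {
				return nil
			}
			time.Sleep(1 * time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		// Simulate a VM that never leaves "Running", as in the failed test above.
		err := stopVM(func() string { return "Running" }, func() {}, 3*time.Second)
		fmt.Println(err)
	}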
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-342677 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-342677 status: exit status 3 (18.882624152s)

                                                
                                                
-- stdout --
	multinode-342677
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-342677-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0805 23:50:38.151394   49722 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.89:22: connect: no route to host
	E0805 23:50:38.151433   49722 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.89:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-342677 status" : exit status 3
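	(Editor's note, illustration only.) The status errors above come from the worker VM's SSH endpoint (192.168.39.89:22) being unreachable, which is why the node is reported as host: Error / kubelet: Nonexistent. The snippet below is not minikube code; it only reproduces the same condition with a plain TCP dial and timeout.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Address taken from the status error above; an unreachable host fails
		// here with "connect: no route to host" or an i/o timeout.
		conn, err := net.DialTimeout("tcp", "192.168.39.89:22", 5*time.Second)
		if err != nil {
			fmt.Println("node unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("ssh port reachable")
	}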
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-342677 -n multinode-342677
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-342677 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-342677 logs -n 25: (1.45833833s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-342677 ssh -n                                                                 | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:41 UTC | 05 Aug 24 23:41 UTC |
	|         | multinode-342677-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-342677 cp multinode-342677-m02:/home/docker/cp-test.txt                       | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:41 UTC | 05 Aug 24 23:41 UTC |
	|         | multinode-342677:/home/docker/cp-test_multinode-342677-m02_multinode-342677.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-342677 ssh -n                                                                 | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:41 UTC | 05 Aug 24 23:41 UTC |
	|         | multinode-342677-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-342677 ssh -n multinode-342677 sudo cat                                       | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:41 UTC | 05 Aug 24 23:41 UTC |
	|         | /home/docker/cp-test_multinode-342677-m02_multinode-342677.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-342677 cp multinode-342677-m02:/home/docker/cp-test.txt                       | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:41 UTC | 05 Aug 24 23:41 UTC |
	|         | multinode-342677-m03:/home/docker/cp-test_multinode-342677-m02_multinode-342677-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-342677 ssh -n                                                                 | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:41 UTC | 05 Aug 24 23:41 UTC |
	|         | multinode-342677-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-342677 ssh -n multinode-342677-m03 sudo cat                                   | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:41 UTC | 05 Aug 24 23:41 UTC |
	|         | /home/docker/cp-test_multinode-342677-m02_multinode-342677-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-342677 cp testdata/cp-test.txt                                                | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:41 UTC | 05 Aug 24 23:41 UTC |
	|         | multinode-342677-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-342677 ssh -n                                                                 | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:41 UTC | 05 Aug 24 23:41 UTC |
	|         | multinode-342677-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-342677 cp multinode-342677-m03:/home/docker/cp-test.txt                       | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:41 UTC | 05 Aug 24 23:41 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1038504423/001/cp-test_multinode-342677-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-342677 ssh -n                                                                 | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:41 UTC | 05 Aug 24 23:41 UTC |
	|         | multinode-342677-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-342677 cp multinode-342677-m03:/home/docker/cp-test.txt                       | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:41 UTC | 05 Aug 24 23:41 UTC |
	|         | multinode-342677:/home/docker/cp-test_multinode-342677-m03_multinode-342677.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-342677 ssh -n                                                                 | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:41 UTC | 05 Aug 24 23:41 UTC |
	|         | multinode-342677-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-342677 ssh -n multinode-342677 sudo cat                                       | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:41 UTC | 05 Aug 24 23:42 UTC |
	|         | /home/docker/cp-test_multinode-342677-m03_multinode-342677.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-342677 cp multinode-342677-m03:/home/docker/cp-test.txt                       | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:42 UTC | 05 Aug 24 23:42 UTC |
	|         | multinode-342677-m02:/home/docker/cp-test_multinode-342677-m03_multinode-342677-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-342677 ssh -n                                                                 | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:42 UTC | 05 Aug 24 23:42 UTC |
	|         | multinode-342677-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-342677 ssh -n multinode-342677-m02 sudo cat                                   | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:42 UTC | 05 Aug 24 23:42 UTC |
	|         | /home/docker/cp-test_multinode-342677-m03_multinode-342677-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-342677 node stop m03                                                          | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:42 UTC | 05 Aug 24 23:42 UTC |
	| node    | multinode-342677 node start                                                             | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:42 UTC | 05 Aug 24 23:42 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-342677                                                                | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:42 UTC |                     |
	| stop    | -p multinode-342677                                                                     | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:42 UTC |                     |
	| start   | -p multinode-342677                                                                     | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:44 UTC | 05 Aug 24 23:48 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-342677                                                                | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:48 UTC |                     |
	| node    | multinode-342677 node delete                                                            | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:48 UTC | 05 Aug 24 23:48 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-342677 stop                                                                   | multinode-342677 | jenkins | v1.33.1 | 05 Aug 24 23:48 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 23:44:43
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 23:44:43.732401   47941 out.go:291] Setting OutFile to fd 1 ...
	I0805 23:44:43.732517   47941 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:44:43.732525   47941 out.go:304] Setting ErrFile to fd 2...
	I0805 23:44:43.732529   47941 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:44:43.732699   47941 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-9606/.minikube/bin
	I0805 23:44:43.733218   47941 out.go:298] Setting JSON to false
	I0805 23:44:43.734095   47941 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5230,"bootTime":1722896254,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0805 23:44:43.734148   47941 start.go:139] virtualization: kvm guest
	I0805 23:44:43.737197   47941 out.go:177] * [multinode-342677] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0805 23:44:43.738673   47941 notify.go:220] Checking for updates...
	I0805 23:44:43.738686   47941 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 23:44:43.740269   47941 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 23:44:43.741803   47941 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19373-9606/kubeconfig
	I0805 23:44:43.743344   47941 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19373-9606/.minikube
	I0805 23:44:43.744742   47941 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0805 23:44:43.746273   47941 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 23:44:43.748007   47941 config.go:182] Loaded profile config "multinode-342677": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:44:43.748104   47941 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 23:44:43.748463   47941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:44:43.748512   47941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:44:43.764312   47941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39423
	I0805 23:44:43.764761   47941 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:44:43.765280   47941 main.go:141] libmachine: Using API Version  1
	I0805 23:44:43.765297   47941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:44:43.765586   47941 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:44:43.765773   47941 main.go:141] libmachine: (multinode-342677) Calling .DriverName
	I0805 23:44:43.803461   47941 out.go:177] * Using the kvm2 driver based on existing profile
	I0805 23:44:43.804752   47941 start.go:297] selected driver: kvm2
	I0805 23:44:43.804776   47941 start.go:901] validating driver "kvm2" against &{Name:multinode-342677 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-342677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.75 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 23:44:43.804920   47941 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 23:44:43.805266   47941 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 23:44:43.805347   47941 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19373-9606/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0805 23:44:43.820369   47941 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0805 23:44:43.821188   47941 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 23:44:43.821264   47941 cni.go:84] Creating CNI manager for ""
	I0805 23:44:43.821279   47941 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0805 23:44:43.821340   47941 start.go:340] cluster config:
	{Name:multinode-342677 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-342677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.75 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 23:44:43.821465   47941 iso.go:125] acquiring lock: {Name:mk54a637ed625e04bb2b6adf973b61c976cd6d35 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 23:44:43.823213   47941 out.go:177] * Starting "multinode-342677" primary control-plane node in "multinode-342677" cluster
	I0805 23:44:43.825018   47941 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 23:44:43.825059   47941 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0805 23:44:43.825073   47941 cache.go:56] Caching tarball of preloaded images
	I0805 23:44:43.825195   47941 preload.go:172] Found /home/jenkins/minikube-integration/19373-9606/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0805 23:44:43.825207   47941 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0805 23:44:43.825357   47941 profile.go:143] Saving config to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/multinode-342677/config.json ...
	I0805 23:44:43.825571   47941 start.go:360] acquireMachinesLock for multinode-342677: {Name:mkd2ba511c39504598222edbf83078b718329186 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 23:44:43.825628   47941 start.go:364] duration metric: took 33.872µs to acquireMachinesLock for "multinode-342677"
	I0805 23:44:43.825647   47941 start.go:96] Skipping create...Using existing machine configuration
	I0805 23:44:43.825656   47941 fix.go:54] fixHost starting: 
	I0805 23:44:43.825912   47941 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:44:43.825947   47941 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:44:43.839700   47941 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36433
	I0805 23:44:43.840074   47941 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:44:43.840544   47941 main.go:141] libmachine: Using API Version  1
	I0805 23:44:43.840565   47941 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:44:43.840923   47941 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:44:43.841136   47941 main.go:141] libmachine: (multinode-342677) Calling .DriverName
	I0805 23:44:43.841288   47941 main.go:141] libmachine: (multinode-342677) Calling .GetState
	I0805 23:44:43.843192   47941 fix.go:112] recreateIfNeeded on multinode-342677: state=Running err=<nil>
	W0805 23:44:43.843209   47941 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 23:44:43.845395   47941 out.go:177] * Updating the running kvm2 "multinode-342677" VM ...
	I0805 23:44:43.846838   47941 machine.go:94] provisionDockerMachine start ...
	I0805 23:44:43.846860   47941 main.go:141] libmachine: (multinode-342677) Calling .DriverName
	I0805 23:44:43.847163   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHHostname
	I0805 23:44:43.849681   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:44:43.850237   47941 main.go:141] libmachine: (multinode-342677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:94:1a", ip: ""} in network mk-multinode-342677: {Iface:virbr1 ExpiryTime:2024-08-06 00:39:05 +0000 UTC Type:0 Mac:52:54:00:90:94:1a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-342677 Clientid:01:52:54:00:90:94:1a}
	I0805 23:44:43.850275   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined IP address 192.168.39.10 and MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:44:43.850410   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHPort
	I0805 23:44:43.850596   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHKeyPath
	I0805 23:44:43.850743   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHKeyPath
	I0805 23:44:43.851009   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHUsername
	I0805 23:44:43.851248   47941 main.go:141] libmachine: Using SSH client type: native
	I0805 23:44:43.851461   47941 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0805 23:44:43.851475   47941 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 23:44:43.968076   47941 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-342677
	
	I0805 23:44:43.968110   47941 main.go:141] libmachine: (multinode-342677) Calling .GetMachineName
	I0805 23:44:43.968360   47941 buildroot.go:166] provisioning hostname "multinode-342677"
	I0805 23:44:43.968378   47941 main.go:141] libmachine: (multinode-342677) Calling .GetMachineName
	I0805 23:44:43.968574   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHHostname
	I0805 23:44:43.971403   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:44:43.971733   47941 main.go:141] libmachine: (multinode-342677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:94:1a", ip: ""} in network mk-multinode-342677: {Iface:virbr1 ExpiryTime:2024-08-06 00:39:05 +0000 UTC Type:0 Mac:52:54:00:90:94:1a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-342677 Clientid:01:52:54:00:90:94:1a}
	I0805 23:44:43.971757   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined IP address 192.168.39.10 and MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:44:43.971887   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHPort
	I0805 23:44:43.972051   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHKeyPath
	I0805 23:44:43.972207   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHKeyPath
	I0805 23:44:43.972307   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHUsername
	I0805 23:44:43.972478   47941 main.go:141] libmachine: Using SSH client type: native
	I0805 23:44:43.972645   47941 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0805 23:44:43.972658   47941 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-342677 && echo "multinode-342677" | sudo tee /etc/hostname
	I0805 23:44:44.116141   47941 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-342677
	
	I0805 23:44:44.116166   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHHostname
	I0805 23:44:44.119130   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:44:44.119520   47941 main.go:141] libmachine: (multinode-342677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:94:1a", ip: ""} in network mk-multinode-342677: {Iface:virbr1 ExpiryTime:2024-08-06 00:39:05 +0000 UTC Type:0 Mac:52:54:00:90:94:1a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-342677 Clientid:01:52:54:00:90:94:1a}
	I0805 23:44:44.119550   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined IP address 192.168.39.10 and MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:44:44.119778   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHPort
	I0805 23:44:44.119995   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHKeyPath
	I0805 23:44:44.120163   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHKeyPath
	I0805 23:44:44.120323   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHUsername
	I0805 23:44:44.120508   47941 main.go:141] libmachine: Using SSH client type: native
	I0805 23:44:44.120727   47941 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0805 23:44:44.120745   47941 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-342677' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-342677/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-342677' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 23:44:44.236051   47941 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 23:44:44.236094   47941 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19373-9606/.minikube CaCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19373-9606/.minikube}
	I0805 23:44:44.236134   47941 buildroot.go:174] setting up certificates
	I0805 23:44:44.236142   47941 provision.go:84] configureAuth start
	I0805 23:44:44.236152   47941 main.go:141] libmachine: (multinode-342677) Calling .GetMachineName
	I0805 23:44:44.236418   47941 main.go:141] libmachine: (multinode-342677) Calling .GetIP
	I0805 23:44:44.239064   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:44:44.239413   47941 main.go:141] libmachine: (multinode-342677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:94:1a", ip: ""} in network mk-multinode-342677: {Iface:virbr1 ExpiryTime:2024-08-06 00:39:05 +0000 UTC Type:0 Mac:52:54:00:90:94:1a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-342677 Clientid:01:52:54:00:90:94:1a}
	I0805 23:44:44.239438   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined IP address 192.168.39.10 and MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:44:44.239627   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHHostname
	I0805 23:44:44.242249   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:44:44.242732   47941 main.go:141] libmachine: (multinode-342677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:94:1a", ip: ""} in network mk-multinode-342677: {Iface:virbr1 ExpiryTime:2024-08-06 00:39:05 +0000 UTC Type:0 Mac:52:54:00:90:94:1a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-342677 Clientid:01:52:54:00:90:94:1a}
	I0805 23:44:44.242772   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined IP address 192.168.39.10 and MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:44:44.242891   47941 provision.go:143] copyHostCerts
	I0805 23:44:44.242931   47941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem
	I0805 23:44:44.242980   47941 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem, removing ...
	I0805 23:44:44.242994   47941 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem
	I0805 23:44:44.243115   47941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem (1082 bytes)
	I0805 23:44:44.243226   47941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem
	I0805 23:44:44.243250   47941 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem, removing ...
	I0805 23:44:44.243257   47941 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem
	I0805 23:44:44.243303   47941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem (1123 bytes)
	I0805 23:44:44.243399   47941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem
	I0805 23:44:44.243422   47941 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem, removing ...
	I0805 23:44:44.243432   47941 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem
	I0805 23:44:44.243473   47941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem (1679 bytes)
	I0805 23:44:44.243553   47941 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem org=jenkins.multinode-342677 san=[127.0.0.1 192.168.39.10 localhost minikube multinode-342677]
	I0805 23:44:44.492597   47941 provision.go:177] copyRemoteCerts
	I0805 23:44:44.492669   47941 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 23:44:44.492696   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHHostname
	I0805 23:44:44.495380   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:44:44.495750   47941 main.go:141] libmachine: (multinode-342677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:94:1a", ip: ""} in network mk-multinode-342677: {Iface:virbr1 ExpiryTime:2024-08-06 00:39:05 +0000 UTC Type:0 Mac:52:54:00:90:94:1a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-342677 Clientid:01:52:54:00:90:94:1a}
	I0805 23:44:44.495771   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined IP address 192.168.39.10 and MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:44:44.495988   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHPort
	I0805 23:44:44.496214   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHKeyPath
	I0805 23:44:44.496388   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHUsername
	I0805 23:44:44.496495   47941 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/multinode-342677/id_rsa Username:docker}
	I0805 23:44:44.585516   47941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 23:44:44.585604   47941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 23:44:44.614439   47941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 23:44:44.614516   47941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0805 23:44:44.640074   47941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 23:44:44.640156   47941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0805 23:44:44.665760   47941 provision.go:87] duration metric: took 429.607552ms to configureAuth
	I0805 23:44:44.665790   47941 buildroot.go:189] setting minikube options for container-runtime
	I0805 23:44:44.666036   47941 config.go:182] Loaded profile config "multinode-342677": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:44:44.666108   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHHostname
	I0805 23:44:44.668519   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:44:44.668845   47941 main.go:141] libmachine: (multinode-342677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:94:1a", ip: ""} in network mk-multinode-342677: {Iface:virbr1 ExpiryTime:2024-08-06 00:39:05 +0000 UTC Type:0 Mac:52:54:00:90:94:1a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-342677 Clientid:01:52:54:00:90:94:1a}
	I0805 23:44:44.668869   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined IP address 192.168.39.10 and MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:44:44.669018   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHPort
	I0805 23:44:44.669191   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHKeyPath
	I0805 23:44:44.669361   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHKeyPath
	I0805 23:44:44.669502   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHUsername
	I0805 23:44:44.669675   47941 main.go:141] libmachine: Using SSH client type: native
	I0805 23:44:44.669874   47941 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0805 23:44:44.669894   47941 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0805 23:46:15.356788   47941 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0805 23:46:15.356817   47941 machine.go:97] duration metric: took 1m31.509963334s to provisionDockerMachine
	I0805 23:46:15.356832   47941 start.go:293] postStartSetup for "multinode-342677" (driver="kvm2")
	I0805 23:46:15.356845   47941 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 23:46:15.356864   47941 main.go:141] libmachine: (multinode-342677) Calling .DriverName
	I0805 23:46:15.357171   47941 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 23:46:15.357207   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHHostname
	I0805 23:46:15.360715   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:46:15.361255   47941 main.go:141] libmachine: (multinode-342677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:94:1a", ip: ""} in network mk-multinode-342677: {Iface:virbr1 ExpiryTime:2024-08-06 00:39:05 +0000 UTC Type:0 Mac:52:54:00:90:94:1a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-342677 Clientid:01:52:54:00:90:94:1a}
	I0805 23:46:15.361288   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined IP address 192.168.39.10 and MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:46:15.361446   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHPort
	I0805 23:46:15.361654   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHKeyPath
	I0805 23:46:15.361830   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHUsername
	I0805 23:46:15.361979   47941 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/multinode-342677/id_rsa Username:docker}
	I0805 23:46:15.450587   47941 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 23:46:15.455090   47941 command_runner.go:130] > NAME=Buildroot
	I0805 23:46:15.455114   47941 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0805 23:46:15.455122   47941 command_runner.go:130] > ID=buildroot
	I0805 23:46:15.455130   47941 command_runner.go:130] > VERSION_ID=2023.02.9
	I0805 23:46:15.455144   47941 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0805 23:46:15.455425   47941 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 23:46:15.455450   47941 filesync.go:126] Scanning /home/jenkins/minikube-integration/19373-9606/.minikube/addons for local assets ...
	I0805 23:46:15.455518   47941 filesync.go:126] Scanning /home/jenkins/minikube-integration/19373-9606/.minikube/files for local assets ...
	I0805 23:46:15.455626   47941 filesync.go:149] local asset: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem -> 167922.pem in /etc/ssl/certs
	I0805 23:46:15.455639   47941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem -> /etc/ssl/certs/167922.pem
	I0805 23:46:15.455746   47941 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 23:46:15.465622   47941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem --> /etc/ssl/certs/167922.pem (1708 bytes)
	I0805 23:46:15.490589   47941 start.go:296] duration metric: took 133.742358ms for postStartSetup
	I0805 23:46:15.490637   47941 fix.go:56] duration metric: took 1m31.664980969s for fixHost
	I0805 23:46:15.490660   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHHostname
	I0805 23:46:15.493250   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:46:15.493616   47941 main.go:141] libmachine: (multinode-342677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:94:1a", ip: ""} in network mk-multinode-342677: {Iface:virbr1 ExpiryTime:2024-08-06 00:39:05 +0000 UTC Type:0 Mac:52:54:00:90:94:1a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-342677 Clientid:01:52:54:00:90:94:1a}
	I0805 23:46:15.493646   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined IP address 192.168.39.10 and MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:46:15.493780   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHPort
	I0805 23:46:15.493986   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHKeyPath
	I0805 23:46:15.494156   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHKeyPath
	I0805 23:46:15.494262   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHUsername
	I0805 23:46:15.494392   47941 main.go:141] libmachine: Using SSH client type: native
	I0805 23:46:15.494560   47941 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.10 22 <nil> <nil>}
	I0805 23:46:15.494572   47941 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 23:46:15.608152   47941 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722901575.590121821
	
	I0805 23:46:15.608179   47941 fix.go:216] guest clock: 1722901575.590121821
	I0805 23:46:15.608189   47941 fix.go:229] Guest: 2024-08-05 23:46:15.590121821 +0000 UTC Remote: 2024-08-05 23:46:15.490642413 +0000 UTC m=+91.794317759 (delta=99.479408ms)
	I0805 23:46:15.608229   47941 fix.go:200] guest clock delta is within tolerance: 99.479408ms
	I0805 23:46:15.608240   47941 start.go:83] releasing machines lock for "multinode-342677", held for 1m31.782599766s
	I0805 23:46:15.608263   47941 main.go:141] libmachine: (multinode-342677) Calling .DriverName
	I0805 23:46:15.608521   47941 main.go:141] libmachine: (multinode-342677) Calling .GetIP
	I0805 23:46:15.611183   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:46:15.611602   47941 main.go:141] libmachine: (multinode-342677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:94:1a", ip: ""} in network mk-multinode-342677: {Iface:virbr1 ExpiryTime:2024-08-06 00:39:05 +0000 UTC Type:0 Mac:52:54:00:90:94:1a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-342677 Clientid:01:52:54:00:90:94:1a}
	I0805 23:46:15.611633   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined IP address 192.168.39.10 and MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:46:15.611829   47941 main.go:141] libmachine: (multinode-342677) Calling .DriverName
	I0805 23:46:15.612479   47941 main.go:141] libmachine: (multinode-342677) Calling .DriverName
	I0805 23:46:15.612680   47941 main.go:141] libmachine: (multinode-342677) Calling .DriverName
	I0805 23:46:15.612764   47941 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 23:46:15.612821   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHHostname
	I0805 23:46:15.612917   47941 ssh_runner.go:195] Run: cat /version.json
	I0805 23:46:15.612943   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHHostname
	I0805 23:46:15.615515   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:46:15.615789   47941 main.go:141] libmachine: (multinode-342677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:94:1a", ip: ""} in network mk-multinode-342677: {Iface:virbr1 ExpiryTime:2024-08-06 00:39:05 +0000 UTC Type:0 Mac:52:54:00:90:94:1a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-342677 Clientid:01:52:54:00:90:94:1a}
	I0805 23:46:15.615816   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined IP address 192.168.39.10 and MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:46:15.615904   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:46:15.615952   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHPort
	I0805 23:46:15.616123   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHKeyPath
	I0805 23:46:15.616257   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHUsername
	I0805 23:46:15.616325   47941 main.go:141] libmachine: (multinode-342677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:94:1a", ip: ""} in network mk-multinode-342677: {Iface:virbr1 ExpiryTime:2024-08-06 00:39:05 +0000 UTC Type:0 Mac:52:54:00:90:94:1a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-342677 Clientid:01:52:54:00:90:94:1a}
	I0805 23:46:15.616349   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined IP address 192.168.39.10 and MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:46:15.616407   47941 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/multinode-342677/id_rsa Username:docker}
	I0805 23:46:15.616527   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHPort
	I0805 23:46:15.616677   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHKeyPath
	I0805 23:46:15.616796   47941 main.go:141] libmachine: (multinode-342677) Calling .GetSSHUsername
	I0805 23:46:15.616939   47941 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/multinode-342677/id_rsa Username:docker}
	I0805 23:46:15.714535   47941 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0805 23:46:15.715316   47941 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0805 23:46:15.715470   47941 ssh_runner.go:195] Run: systemctl --version
	I0805 23:46:15.721764   47941 command_runner.go:130] > systemd 252 (252)
	I0805 23:46:15.721823   47941 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0805 23:46:15.721895   47941 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0805 23:46:15.891119   47941 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0805 23:46:15.899097   47941 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0805 23:46:15.899479   47941 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 23:46:15.899547   47941 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 23:46:15.910259   47941 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0805 23:46:15.910289   47941 start.go:495] detecting cgroup driver to use...
	I0805 23:46:15.910365   47941 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 23:46:15.927314   47941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 23:46:15.942300   47941 docker.go:217] disabling cri-docker service (if available) ...
	I0805 23:46:15.942355   47941 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0805 23:46:15.955916   47941 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0805 23:46:15.969657   47941 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0805 23:46:16.110031   47941 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0805 23:46:16.259973   47941 docker.go:233] disabling docker service ...
	I0805 23:46:16.260053   47941 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0805 23:46:16.280842   47941 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0805 23:46:16.295418   47941 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0805 23:46:16.451997   47941 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0805 23:46:16.612750   47941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0805 23:46:16.627787   47941 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 23:46:16.647576   47941 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0805 23:46:16.648113   47941 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0805 23:46:16.648185   47941 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:46:16.659491   47941 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0805 23:46:16.659581   47941 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:46:16.670628   47941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:46:16.682194   47941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:46:16.693080   47941 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 23:46:16.704700   47941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:46:16.716395   47941 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:46:16.727662   47941 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0805 23:46:16.738079   47941 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 23:46:16.748597   47941 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0805 23:46:16.748674   47941 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 23:46:16.758318   47941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 23:46:16.894240   47941 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0805 23:46:24.560307   47941 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.66602585s)
	I0805 23:46:24.560338   47941 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0805 23:46:24.560390   47941 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0805 23:46:24.565642   47941 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0805 23:46:24.565665   47941 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0805 23:46:24.565680   47941 command_runner.go:130] > Device: 0,22	Inode: 1345        Links: 1
	I0805 23:46:24.565692   47941 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0805 23:46:24.565704   47941 command_runner.go:130] > Access: 2024-08-05 23:46:24.431503312 +0000
	I0805 23:46:24.565718   47941 command_runner.go:130] > Modify: 2024-08-05 23:46:24.431503312 +0000
	I0805 23:46:24.565729   47941 command_runner.go:130] > Change: 2024-08-05 23:46:24.431503312 +0000
	I0805 23:46:24.565736   47941 command_runner.go:130] >  Birth: -
	I0805 23:46:24.565964   47941 start.go:563] Will wait 60s for crictl version
	I0805 23:46:24.566014   47941 ssh_runner.go:195] Run: which crictl
	I0805 23:46:24.569764   47941 command_runner.go:130] > /usr/bin/crictl
	I0805 23:46:24.569908   47941 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 23:46:24.610487   47941 command_runner.go:130] > Version:  0.1.0
	I0805 23:46:24.610513   47941 command_runner.go:130] > RuntimeName:  cri-o
	I0805 23:46:24.610520   47941 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0805 23:46:24.610527   47941 command_runner.go:130] > RuntimeApiVersion:  v1
	I0805 23:46:24.610563   47941 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0805 23:46:24.610627   47941 ssh_runner.go:195] Run: crio --version
	I0805 23:46:24.641066   47941 command_runner.go:130] > crio version 1.29.1
	I0805 23:46:24.641089   47941 command_runner.go:130] > Version:        1.29.1
	I0805 23:46:24.641096   47941 command_runner.go:130] > GitCommit:      unknown
	I0805 23:46:24.641102   47941 command_runner.go:130] > GitCommitDate:  unknown
	I0805 23:46:24.641108   47941 command_runner.go:130] > GitTreeState:   clean
	I0805 23:46:24.641115   47941 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0805 23:46:24.641121   47941 command_runner.go:130] > GoVersion:      go1.21.6
	I0805 23:46:24.641127   47941 command_runner.go:130] > Compiler:       gc
	I0805 23:46:24.641132   47941 command_runner.go:130] > Platform:       linux/amd64
	I0805 23:46:24.641143   47941 command_runner.go:130] > Linkmode:       dynamic
	I0805 23:46:24.641150   47941 command_runner.go:130] > BuildTags:      
	I0805 23:46:24.641156   47941 command_runner.go:130] >   containers_image_ostree_stub
	I0805 23:46:24.641163   47941 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0805 23:46:24.641169   47941 command_runner.go:130] >   btrfs_noversion
	I0805 23:46:24.641182   47941 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0805 23:46:24.641188   47941 command_runner.go:130] >   libdm_no_deferred_remove
	I0805 23:46:24.641194   47941 command_runner.go:130] >   seccomp
	I0805 23:46:24.641204   47941 command_runner.go:130] > LDFlags:          unknown
	I0805 23:46:24.641225   47941 command_runner.go:130] > SeccompEnabled:   true
	I0805 23:46:24.641235   47941 command_runner.go:130] > AppArmorEnabled:  false
	I0805 23:46:24.641309   47941 ssh_runner.go:195] Run: crio --version
	I0805 23:46:24.669500   47941 command_runner.go:130] > crio version 1.29.1
	I0805 23:46:24.669527   47941 command_runner.go:130] > Version:        1.29.1
	I0805 23:46:24.669532   47941 command_runner.go:130] > GitCommit:      unknown
	I0805 23:46:24.669537   47941 command_runner.go:130] > GitCommitDate:  unknown
	I0805 23:46:24.669541   47941 command_runner.go:130] > GitTreeState:   clean
	I0805 23:46:24.669546   47941 command_runner.go:130] > BuildDate:      2024-07-29T16:04:01Z
	I0805 23:46:24.669550   47941 command_runner.go:130] > GoVersion:      go1.21.6
	I0805 23:46:24.669553   47941 command_runner.go:130] > Compiler:       gc
	I0805 23:46:24.669558   47941 command_runner.go:130] > Platform:       linux/amd64
	I0805 23:46:24.669562   47941 command_runner.go:130] > Linkmode:       dynamic
	I0805 23:46:24.669566   47941 command_runner.go:130] > BuildTags:      
	I0805 23:46:24.669570   47941 command_runner.go:130] >   containers_image_ostree_stub
	I0805 23:46:24.669574   47941 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0805 23:46:24.669578   47941 command_runner.go:130] >   btrfs_noversion
	I0805 23:46:24.669582   47941 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0805 23:46:24.669587   47941 command_runner.go:130] >   libdm_no_deferred_remove
	I0805 23:46:24.669590   47941 command_runner.go:130] >   seccomp
	I0805 23:46:24.669594   47941 command_runner.go:130] > LDFlags:          unknown
	I0805 23:46:24.669598   47941 command_runner.go:130] > SeccompEnabled:   true
	I0805 23:46:24.669604   47941 command_runner.go:130] > AppArmorEnabled:  false
	I0805 23:46:24.673031   47941 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0805 23:46:24.674650   47941 main.go:141] libmachine: (multinode-342677) Calling .GetIP
	I0805 23:46:24.677360   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:46:24.677741   47941 main.go:141] libmachine: (multinode-342677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:94:1a", ip: ""} in network mk-multinode-342677: {Iface:virbr1 ExpiryTime:2024-08-06 00:39:05 +0000 UTC Type:0 Mac:52:54:00:90:94:1a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-342677 Clientid:01:52:54:00:90:94:1a}
	I0805 23:46:24.677775   47941 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined IP address 192.168.39.10 and MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:46:24.678016   47941 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0805 23:46:24.682396   47941 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0805 23:46:24.682569   47941 kubeadm.go:883] updating cluster {Name:multinode-342677 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-342677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.75 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
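	The cluster spec dumped in the line above is minikube's persisted profile configuration; outside of this log it can normally be read back from the profile store (a sketch, assuming the default layout of ~/.minikube/profiles/<name>/config.json; this CI job may place MINIKUBE_HOME elsewhere):
	
		cat ~/.minikube/profiles/multinode-342677/config.json
		out/minikube-linux-amd64 profile list --output json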
	I0805 23:46:24.682702   47941 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 23:46:24.682746   47941 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 23:46:24.728631   47941 command_runner.go:130] > {
	I0805 23:46:24.728657   47941 command_runner.go:130] >   "images": [
	I0805 23:46:24.728661   47941 command_runner.go:130] >     {
	I0805 23:46:24.728669   47941 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0805 23:46:24.728673   47941 command_runner.go:130] >       "repoTags": [
	I0805 23:46:24.728679   47941 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0805 23:46:24.728683   47941 command_runner.go:130] >       ],
	I0805 23:46:24.728687   47941 command_runner.go:130] >       "repoDigests": [
	I0805 23:46:24.728694   47941 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0805 23:46:24.728701   47941 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0805 23:46:24.728705   47941 command_runner.go:130] >       ],
	I0805 23:46:24.728709   47941 command_runner.go:130] >       "size": "87165492",
	I0805 23:46:24.728715   47941 command_runner.go:130] >       "uid": null,
	I0805 23:46:24.728719   47941 command_runner.go:130] >       "username": "",
	I0805 23:46:24.728725   47941 command_runner.go:130] >       "spec": null,
	I0805 23:46:24.728729   47941 command_runner.go:130] >       "pinned": false
	I0805 23:46:24.728732   47941 command_runner.go:130] >     },
	I0805 23:46:24.728736   47941 command_runner.go:130] >     {
	I0805 23:46:24.728742   47941 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0805 23:46:24.728749   47941 command_runner.go:130] >       "repoTags": [
	I0805 23:46:24.728754   47941 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0805 23:46:24.728758   47941 command_runner.go:130] >       ],
	I0805 23:46:24.728762   47941 command_runner.go:130] >       "repoDigests": [
	I0805 23:46:24.728769   47941 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0805 23:46:24.728778   47941 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0805 23:46:24.728781   47941 command_runner.go:130] >       ],
	I0805 23:46:24.728785   47941 command_runner.go:130] >       "size": "87165492",
	I0805 23:46:24.728789   47941 command_runner.go:130] >       "uid": null,
	I0805 23:46:24.728795   47941 command_runner.go:130] >       "username": "",
	I0805 23:46:24.728801   47941 command_runner.go:130] >       "spec": null,
	I0805 23:46:24.728804   47941 command_runner.go:130] >       "pinned": false
	I0805 23:46:24.728810   47941 command_runner.go:130] >     },
	I0805 23:46:24.728816   47941 command_runner.go:130] >     {
	I0805 23:46:24.728822   47941 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0805 23:46:24.728826   47941 command_runner.go:130] >       "repoTags": [
	I0805 23:46:24.728831   47941 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0805 23:46:24.728834   47941 command_runner.go:130] >       ],
	I0805 23:46:24.728838   47941 command_runner.go:130] >       "repoDigests": [
	I0805 23:46:24.728845   47941 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0805 23:46:24.728853   47941 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0805 23:46:24.728857   47941 command_runner.go:130] >       ],
	I0805 23:46:24.728861   47941 command_runner.go:130] >       "size": "1363676",
	I0805 23:46:24.728865   47941 command_runner.go:130] >       "uid": null,
	I0805 23:46:24.728870   47941 command_runner.go:130] >       "username": "",
	I0805 23:46:24.728875   47941 command_runner.go:130] >       "spec": null,
	I0805 23:46:24.728881   47941 command_runner.go:130] >       "pinned": false
	I0805 23:46:24.728886   47941 command_runner.go:130] >     },
	I0805 23:46:24.728890   47941 command_runner.go:130] >     {
	I0805 23:46:24.728895   47941 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0805 23:46:24.728902   47941 command_runner.go:130] >       "repoTags": [
	I0805 23:46:24.728907   47941 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0805 23:46:24.728912   47941 command_runner.go:130] >       ],
	I0805 23:46:24.728916   47941 command_runner.go:130] >       "repoDigests": [
	I0805 23:46:24.728923   47941 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0805 23:46:24.728936   47941 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0805 23:46:24.728939   47941 command_runner.go:130] >       ],
	I0805 23:46:24.728944   47941 command_runner.go:130] >       "size": "31470524",
	I0805 23:46:24.728949   47941 command_runner.go:130] >       "uid": null,
	I0805 23:46:24.728953   47941 command_runner.go:130] >       "username": "",
	I0805 23:46:24.728958   47941 command_runner.go:130] >       "spec": null,
	I0805 23:46:24.728961   47941 command_runner.go:130] >       "pinned": false
	I0805 23:46:24.728964   47941 command_runner.go:130] >     },
	I0805 23:46:24.728968   47941 command_runner.go:130] >     {
	I0805 23:46:24.728973   47941 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0805 23:46:24.728978   47941 command_runner.go:130] >       "repoTags": [
	I0805 23:46:24.728983   47941 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0805 23:46:24.728989   47941 command_runner.go:130] >       ],
	I0805 23:46:24.728993   47941 command_runner.go:130] >       "repoDigests": [
	I0805 23:46:24.729000   47941 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0805 23:46:24.729010   47941 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0805 23:46:24.729013   47941 command_runner.go:130] >       ],
	I0805 23:46:24.729016   47941 command_runner.go:130] >       "size": "61245718",
	I0805 23:46:24.729020   47941 command_runner.go:130] >       "uid": null,
	I0805 23:46:24.729024   47941 command_runner.go:130] >       "username": "nonroot",
	I0805 23:46:24.729028   47941 command_runner.go:130] >       "spec": null,
	I0805 23:46:24.729032   47941 command_runner.go:130] >       "pinned": false
	I0805 23:46:24.729035   47941 command_runner.go:130] >     },
	I0805 23:46:24.729039   47941 command_runner.go:130] >     {
	I0805 23:46:24.729045   47941 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0805 23:46:24.729049   47941 command_runner.go:130] >       "repoTags": [
	I0805 23:46:24.729053   47941 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0805 23:46:24.729056   47941 command_runner.go:130] >       ],
	I0805 23:46:24.729069   47941 command_runner.go:130] >       "repoDigests": [
	I0805 23:46:24.729078   47941 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0805 23:46:24.729084   47941 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0805 23:46:24.729090   47941 command_runner.go:130] >       ],
	I0805 23:46:24.729094   47941 command_runner.go:130] >       "size": "150779692",
	I0805 23:46:24.729098   47941 command_runner.go:130] >       "uid": {
	I0805 23:46:24.729103   47941 command_runner.go:130] >         "value": "0"
	I0805 23:46:24.729107   47941 command_runner.go:130] >       },
	I0805 23:46:24.729111   47941 command_runner.go:130] >       "username": "",
	I0805 23:46:24.729115   47941 command_runner.go:130] >       "spec": null,
	I0805 23:46:24.729119   47941 command_runner.go:130] >       "pinned": false
	I0805 23:46:24.729122   47941 command_runner.go:130] >     },
	I0805 23:46:24.729125   47941 command_runner.go:130] >     {
	I0805 23:46:24.729131   47941 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0805 23:46:24.729154   47941 command_runner.go:130] >       "repoTags": [
	I0805 23:46:24.729165   47941 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0805 23:46:24.729168   47941 command_runner.go:130] >       ],
	I0805 23:46:24.729172   47941 command_runner.go:130] >       "repoDigests": [
	I0805 23:46:24.729178   47941 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0805 23:46:24.729186   47941 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0805 23:46:24.729189   47941 command_runner.go:130] >       ],
	I0805 23:46:24.729195   47941 command_runner.go:130] >       "size": "117609954",
	I0805 23:46:24.729201   47941 command_runner.go:130] >       "uid": {
	I0805 23:46:24.729205   47941 command_runner.go:130] >         "value": "0"
	I0805 23:46:24.729208   47941 command_runner.go:130] >       },
	I0805 23:46:24.729212   47941 command_runner.go:130] >       "username": "",
	I0805 23:46:24.729215   47941 command_runner.go:130] >       "spec": null,
	I0805 23:46:24.729219   47941 command_runner.go:130] >       "pinned": false
	I0805 23:46:24.729223   47941 command_runner.go:130] >     },
	I0805 23:46:24.729226   47941 command_runner.go:130] >     {
	I0805 23:46:24.729232   47941 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0805 23:46:24.729238   47941 command_runner.go:130] >       "repoTags": [
	I0805 23:46:24.729244   47941 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0805 23:46:24.729247   47941 command_runner.go:130] >       ],
	I0805 23:46:24.729252   47941 command_runner.go:130] >       "repoDigests": [
	I0805 23:46:24.729266   47941 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0805 23:46:24.729276   47941 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0805 23:46:24.729280   47941 command_runner.go:130] >       ],
	I0805 23:46:24.729286   47941 command_runner.go:130] >       "size": "112198984",
	I0805 23:46:24.729291   47941 command_runner.go:130] >       "uid": {
	I0805 23:46:24.729295   47941 command_runner.go:130] >         "value": "0"
	I0805 23:46:24.729298   47941 command_runner.go:130] >       },
	I0805 23:46:24.729302   47941 command_runner.go:130] >       "username": "",
	I0805 23:46:24.729306   47941 command_runner.go:130] >       "spec": null,
	I0805 23:46:24.729310   47941 command_runner.go:130] >       "pinned": false
	I0805 23:46:24.729313   47941 command_runner.go:130] >     },
	I0805 23:46:24.729316   47941 command_runner.go:130] >     {
	I0805 23:46:24.729322   47941 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0805 23:46:24.729325   47941 command_runner.go:130] >       "repoTags": [
	I0805 23:46:24.729329   47941 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0805 23:46:24.729332   47941 command_runner.go:130] >       ],
	I0805 23:46:24.729336   47941 command_runner.go:130] >       "repoDigests": [
	I0805 23:46:24.729345   47941 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0805 23:46:24.729353   47941 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0805 23:46:24.729358   47941 command_runner.go:130] >       ],
	I0805 23:46:24.729362   47941 command_runner.go:130] >       "size": "85953945",
	I0805 23:46:24.729366   47941 command_runner.go:130] >       "uid": null,
	I0805 23:46:24.729370   47941 command_runner.go:130] >       "username": "",
	I0805 23:46:24.729374   47941 command_runner.go:130] >       "spec": null,
	I0805 23:46:24.729378   47941 command_runner.go:130] >       "pinned": false
	I0805 23:46:24.729381   47941 command_runner.go:130] >     },
	I0805 23:46:24.729384   47941 command_runner.go:130] >     {
	I0805 23:46:24.729390   47941 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0805 23:46:24.729395   47941 command_runner.go:130] >       "repoTags": [
	I0805 23:46:24.729400   47941 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0805 23:46:24.729403   47941 command_runner.go:130] >       ],
	I0805 23:46:24.729407   47941 command_runner.go:130] >       "repoDigests": [
	I0805 23:46:24.729417   47941 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0805 23:46:24.729424   47941 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0805 23:46:24.729430   47941 command_runner.go:130] >       ],
	I0805 23:46:24.729434   47941 command_runner.go:130] >       "size": "63051080",
	I0805 23:46:24.729437   47941 command_runner.go:130] >       "uid": {
	I0805 23:46:24.729441   47941 command_runner.go:130] >         "value": "0"
	I0805 23:46:24.729444   47941 command_runner.go:130] >       },
	I0805 23:46:24.729448   47941 command_runner.go:130] >       "username": "",
	I0805 23:46:24.729454   47941 command_runner.go:130] >       "spec": null,
	I0805 23:46:24.729458   47941 command_runner.go:130] >       "pinned": false
	I0805 23:46:24.729462   47941 command_runner.go:130] >     },
	I0805 23:46:24.729465   47941 command_runner.go:130] >     {
	I0805 23:46:24.729471   47941 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0805 23:46:24.729477   47941 command_runner.go:130] >       "repoTags": [
	I0805 23:46:24.729481   47941 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0805 23:46:24.729484   47941 command_runner.go:130] >       ],
	I0805 23:46:24.729488   47941 command_runner.go:130] >       "repoDigests": [
	I0805 23:46:24.729494   47941 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0805 23:46:24.729501   47941 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0805 23:46:24.729504   47941 command_runner.go:130] >       ],
	I0805 23:46:24.729508   47941 command_runner.go:130] >       "size": "750414",
	I0805 23:46:24.729512   47941 command_runner.go:130] >       "uid": {
	I0805 23:46:24.729516   47941 command_runner.go:130] >         "value": "65535"
	I0805 23:46:24.729521   47941 command_runner.go:130] >       },
	I0805 23:46:24.729525   47941 command_runner.go:130] >       "username": "",
	I0805 23:46:24.729529   47941 command_runner.go:130] >       "spec": null,
	I0805 23:46:24.729536   47941 command_runner.go:130] >       "pinned": true
	I0805 23:46:24.729541   47941 command_runner.go:130] >     }
	I0805 23:46:24.729545   47941 command_runner.go:130] >   ]
	I0805 23:46:24.729548   47941 command_runner.go:130] > }
	I0805 23:46:24.730333   47941 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 23:46:24.730346   47941 crio.go:433] Images already preloaded, skipping extraction
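	The inventory above is crictl output from the control-plane node; the same check can be repeated by hand against this profile (a sketch, assuming the multinode-342677 VM from this run is still up and that jq is available on the host, which the log itself does not confirm):
	
		out/minikube-linux-amd64 -p multinode-342677 ssh "sudo crictl images --output json" | jq -r '.images[].repoTags[]'
		out/minikube-linux-amd64 -p multinode-342677 ssh "sudo crictl images --output json" | jq -r '.images[] | select(.pinned) | .repoTags[]'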
	I0805 23:46:24.730397   47941 ssh_runner.go:195] Run: sudo crictl images --output json
	I0805 23:46:24.766588   47941 command_runner.go:130] > {
	I0805 23:46:24.766612   47941 command_runner.go:130] >   "images": [
	I0805 23:46:24.766618   47941 command_runner.go:130] >     {
	I0805 23:46:24.766629   47941 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0805 23:46:24.766634   47941 command_runner.go:130] >       "repoTags": [
	I0805 23:46:24.766640   47941 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0805 23:46:24.766643   47941 command_runner.go:130] >       ],
	I0805 23:46:24.766647   47941 command_runner.go:130] >       "repoDigests": [
	I0805 23:46:24.766654   47941 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0805 23:46:24.766663   47941 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0805 23:46:24.766667   47941 command_runner.go:130] >       ],
	I0805 23:46:24.766671   47941 command_runner.go:130] >       "size": "87165492",
	I0805 23:46:24.766676   47941 command_runner.go:130] >       "uid": null,
	I0805 23:46:24.766680   47941 command_runner.go:130] >       "username": "",
	I0805 23:46:24.766687   47941 command_runner.go:130] >       "spec": null,
	I0805 23:46:24.766693   47941 command_runner.go:130] >       "pinned": false
	I0805 23:46:24.766697   47941 command_runner.go:130] >     },
	I0805 23:46:24.766701   47941 command_runner.go:130] >     {
	I0805 23:46:24.766707   47941 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0805 23:46:24.766714   47941 command_runner.go:130] >       "repoTags": [
	I0805 23:46:24.766719   47941 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0805 23:46:24.766723   47941 command_runner.go:130] >       ],
	I0805 23:46:24.766727   47941 command_runner.go:130] >       "repoDigests": [
	I0805 23:46:24.766734   47941 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0805 23:46:24.766741   47941 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0805 23:46:24.766744   47941 command_runner.go:130] >       ],
	I0805 23:46:24.766749   47941 command_runner.go:130] >       "size": "87165492",
	I0805 23:46:24.766755   47941 command_runner.go:130] >       "uid": null,
	I0805 23:46:24.766761   47941 command_runner.go:130] >       "username": "",
	I0805 23:46:24.766764   47941 command_runner.go:130] >       "spec": null,
	I0805 23:46:24.766771   47941 command_runner.go:130] >       "pinned": false
	I0805 23:46:24.766773   47941 command_runner.go:130] >     },
	I0805 23:46:24.766776   47941 command_runner.go:130] >     {
	I0805 23:46:24.766782   47941 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0805 23:46:24.766786   47941 command_runner.go:130] >       "repoTags": [
	I0805 23:46:24.766791   47941 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0805 23:46:24.766794   47941 command_runner.go:130] >       ],
	I0805 23:46:24.766798   47941 command_runner.go:130] >       "repoDigests": [
	I0805 23:46:24.766805   47941 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0805 23:46:24.766816   47941 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0805 23:46:24.766823   47941 command_runner.go:130] >       ],
	I0805 23:46:24.766829   47941 command_runner.go:130] >       "size": "1363676",
	I0805 23:46:24.766838   47941 command_runner.go:130] >       "uid": null,
	I0805 23:46:24.766842   47941 command_runner.go:130] >       "username": "",
	I0805 23:46:24.766857   47941 command_runner.go:130] >       "spec": null,
	I0805 23:46:24.766860   47941 command_runner.go:130] >       "pinned": false
	I0805 23:46:24.766863   47941 command_runner.go:130] >     },
	I0805 23:46:24.766867   47941 command_runner.go:130] >     {
	I0805 23:46:24.766875   47941 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0805 23:46:24.766882   47941 command_runner.go:130] >       "repoTags": [
	I0805 23:46:24.766887   47941 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0805 23:46:24.766893   47941 command_runner.go:130] >       ],
	I0805 23:46:24.766897   47941 command_runner.go:130] >       "repoDigests": [
	I0805 23:46:24.766906   47941 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0805 23:46:24.766918   47941 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0805 23:46:24.766923   47941 command_runner.go:130] >       ],
	I0805 23:46:24.766928   47941 command_runner.go:130] >       "size": "31470524",
	I0805 23:46:24.766934   47941 command_runner.go:130] >       "uid": null,
	I0805 23:46:24.766938   47941 command_runner.go:130] >       "username": "",
	I0805 23:46:24.766944   47941 command_runner.go:130] >       "spec": null,
	I0805 23:46:24.766948   47941 command_runner.go:130] >       "pinned": false
	I0805 23:46:24.766954   47941 command_runner.go:130] >     },
	I0805 23:46:24.766957   47941 command_runner.go:130] >     {
	I0805 23:46:24.766965   47941 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0805 23:46:24.766969   47941 command_runner.go:130] >       "repoTags": [
	I0805 23:46:24.766976   47941 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0805 23:46:24.766980   47941 command_runner.go:130] >       ],
	I0805 23:46:24.766986   47941 command_runner.go:130] >       "repoDigests": [
	I0805 23:46:24.766993   47941 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0805 23:46:24.767003   47941 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0805 23:46:24.767009   47941 command_runner.go:130] >       ],
	I0805 23:46:24.767013   47941 command_runner.go:130] >       "size": "61245718",
	I0805 23:46:24.767018   47941 command_runner.go:130] >       "uid": null,
	I0805 23:46:24.767023   47941 command_runner.go:130] >       "username": "nonroot",
	I0805 23:46:24.767029   47941 command_runner.go:130] >       "spec": null,
	I0805 23:46:24.767033   47941 command_runner.go:130] >       "pinned": false
	I0805 23:46:24.767038   47941 command_runner.go:130] >     },
	I0805 23:46:24.767042   47941 command_runner.go:130] >     {
	I0805 23:46:24.767056   47941 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0805 23:46:24.767072   47941 command_runner.go:130] >       "repoTags": [
	I0805 23:46:24.767077   47941 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0805 23:46:24.767080   47941 command_runner.go:130] >       ],
	I0805 23:46:24.767084   47941 command_runner.go:130] >       "repoDigests": [
	I0805 23:46:24.767093   47941 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0805 23:46:24.767100   47941 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0805 23:46:24.767106   47941 command_runner.go:130] >       ],
	I0805 23:46:24.767111   47941 command_runner.go:130] >       "size": "150779692",
	I0805 23:46:24.767117   47941 command_runner.go:130] >       "uid": {
	I0805 23:46:24.767121   47941 command_runner.go:130] >         "value": "0"
	I0805 23:46:24.767129   47941 command_runner.go:130] >       },
	I0805 23:46:24.767133   47941 command_runner.go:130] >       "username": "",
	I0805 23:46:24.767139   47941 command_runner.go:130] >       "spec": null,
	I0805 23:46:24.767143   47941 command_runner.go:130] >       "pinned": false
	I0805 23:46:24.767149   47941 command_runner.go:130] >     },
	I0805 23:46:24.767155   47941 command_runner.go:130] >     {
	I0805 23:46:24.767163   47941 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0805 23:46:24.767169   47941 command_runner.go:130] >       "repoTags": [
	I0805 23:46:24.767174   47941 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0805 23:46:24.767180   47941 command_runner.go:130] >       ],
	I0805 23:46:24.767184   47941 command_runner.go:130] >       "repoDigests": [
	I0805 23:46:24.767193   47941 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0805 23:46:24.767202   47941 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0805 23:46:24.767208   47941 command_runner.go:130] >       ],
	I0805 23:46:24.767212   47941 command_runner.go:130] >       "size": "117609954",
	I0805 23:46:24.767218   47941 command_runner.go:130] >       "uid": {
	I0805 23:46:24.767222   47941 command_runner.go:130] >         "value": "0"
	I0805 23:46:24.767228   47941 command_runner.go:130] >       },
	I0805 23:46:24.767232   47941 command_runner.go:130] >       "username": "",
	I0805 23:46:24.767238   47941 command_runner.go:130] >       "spec": null,
	I0805 23:46:24.767244   47941 command_runner.go:130] >       "pinned": false
	I0805 23:46:24.767250   47941 command_runner.go:130] >     },
	I0805 23:46:24.767253   47941 command_runner.go:130] >     {
	I0805 23:46:24.767261   47941 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0805 23:46:24.767267   47941 command_runner.go:130] >       "repoTags": [
	I0805 23:46:24.767272   47941 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0805 23:46:24.767278   47941 command_runner.go:130] >       ],
	I0805 23:46:24.767281   47941 command_runner.go:130] >       "repoDigests": [
	I0805 23:46:24.767296   47941 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0805 23:46:24.767306   47941 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0805 23:46:24.767311   47941 command_runner.go:130] >       ],
	I0805 23:46:24.767316   47941 command_runner.go:130] >       "size": "112198984",
	I0805 23:46:24.767322   47941 command_runner.go:130] >       "uid": {
	I0805 23:46:24.767326   47941 command_runner.go:130] >         "value": "0"
	I0805 23:46:24.767332   47941 command_runner.go:130] >       },
	I0805 23:46:24.767336   47941 command_runner.go:130] >       "username": "",
	I0805 23:46:24.767342   47941 command_runner.go:130] >       "spec": null,
	I0805 23:46:24.767346   47941 command_runner.go:130] >       "pinned": false
	I0805 23:46:24.767351   47941 command_runner.go:130] >     },
	I0805 23:46:24.767354   47941 command_runner.go:130] >     {
	I0805 23:46:24.767362   47941 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0805 23:46:24.767368   47941 command_runner.go:130] >       "repoTags": [
	I0805 23:46:24.767373   47941 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0805 23:46:24.767378   47941 command_runner.go:130] >       ],
	I0805 23:46:24.767383   47941 command_runner.go:130] >       "repoDigests": [
	I0805 23:46:24.767391   47941 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0805 23:46:24.767402   47941 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0805 23:46:24.767407   47941 command_runner.go:130] >       ],
	I0805 23:46:24.767411   47941 command_runner.go:130] >       "size": "85953945",
	I0805 23:46:24.767417   47941 command_runner.go:130] >       "uid": null,
	I0805 23:46:24.767421   47941 command_runner.go:130] >       "username": "",
	I0805 23:46:24.767426   47941 command_runner.go:130] >       "spec": null,
	I0805 23:46:24.767430   47941 command_runner.go:130] >       "pinned": false
	I0805 23:46:24.767435   47941 command_runner.go:130] >     },
	I0805 23:46:24.767439   47941 command_runner.go:130] >     {
	I0805 23:46:24.767445   47941 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0805 23:46:24.767451   47941 command_runner.go:130] >       "repoTags": [
	I0805 23:46:24.767456   47941 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0805 23:46:24.767461   47941 command_runner.go:130] >       ],
	I0805 23:46:24.767465   47941 command_runner.go:130] >       "repoDigests": [
	I0805 23:46:24.767474   47941 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0805 23:46:24.767481   47941 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0805 23:46:24.767487   47941 command_runner.go:130] >       ],
	I0805 23:46:24.767491   47941 command_runner.go:130] >       "size": "63051080",
	I0805 23:46:24.767497   47941 command_runner.go:130] >       "uid": {
	I0805 23:46:24.767501   47941 command_runner.go:130] >         "value": "0"
	I0805 23:46:24.767506   47941 command_runner.go:130] >       },
	I0805 23:46:24.767510   47941 command_runner.go:130] >       "username": "",
	I0805 23:46:24.767516   47941 command_runner.go:130] >       "spec": null,
	I0805 23:46:24.767520   47941 command_runner.go:130] >       "pinned": false
	I0805 23:46:24.767525   47941 command_runner.go:130] >     },
	I0805 23:46:24.767529   47941 command_runner.go:130] >     {
	I0805 23:46:24.767537   47941 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0805 23:46:24.767542   47941 command_runner.go:130] >       "repoTags": [
	I0805 23:46:24.767546   47941 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0805 23:46:24.767551   47941 command_runner.go:130] >       ],
	I0805 23:46:24.767555   47941 command_runner.go:130] >       "repoDigests": [
	I0805 23:46:24.767564   47941 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0805 23:46:24.767572   47941 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0805 23:46:24.767578   47941 command_runner.go:130] >       ],
	I0805 23:46:24.767582   47941 command_runner.go:130] >       "size": "750414",
	I0805 23:46:24.767589   47941 command_runner.go:130] >       "uid": {
	I0805 23:46:24.767593   47941 command_runner.go:130] >         "value": "65535"
	I0805 23:46:24.767597   47941 command_runner.go:130] >       },
	I0805 23:46:24.767601   47941 command_runner.go:130] >       "username": "",
	I0805 23:46:24.767606   47941 command_runner.go:130] >       "spec": null,
	I0805 23:46:24.767610   47941 command_runner.go:130] >       "pinned": true
	I0805 23:46:24.767616   47941 command_runner.go:130] >     }
	I0805 23:46:24.767619   47941 command_runner.go:130] >   ]
	I0805 23:46:24.767624   47941 command_runner.go:130] > }
	I0805 23:46:24.767731   47941 crio.go:514] all images are preloaded for cri-o runtime.
	I0805 23:46:24.767740   47941 cache_images.go:84] Images are preloaded, skipping loading
	I0805 23:46:24.767747   47941 kubeadm.go:934] updating node { 192.168.39.10 8443 v1.30.3 crio true true} ...
	I0805 23:46:24.767840   47941 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-342677 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-342677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
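	The [Unit]/[Service] block above is the kubelet systemd drop-in that minikube renders from this config; if the effective ExecStart flags need to be confirmed on the node after the fact, systemctl can print the unit with all drop-ins applied (a minimal sketch, assuming the multinode-342677 VM is still running):
	
		out/minikube-linux-amd64 -p multinode-342677 ssh "sudo systemctl cat kubelet"
		out/minikube-linux-amd64 -p multinode-342677 ssh "systemctl show kubelet --property=ExecStart"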
	I0805 23:46:24.767900   47941 ssh_runner.go:195] Run: crio config
	I0805 23:46:24.801234   47941 command_runner.go:130] ! time="2024-08-05 23:46:24.782826247Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0805 23:46:24.808009   47941 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0805 23:46:24.812926   47941 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0805 23:46:24.812951   47941 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0805 23:46:24.812960   47941 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0805 23:46:24.812965   47941 command_runner.go:130] > #
	I0805 23:46:24.812975   47941 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0805 23:46:24.812985   47941 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0805 23:46:24.812994   47941 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0805 23:46:24.813001   47941 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0805 23:46:24.813006   47941 command_runner.go:130] > # reload'.
	I0805 23:46:24.813011   47941 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0805 23:46:24.813020   47941 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0805 23:46:24.813026   47941 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0805 23:46:24.813034   47941 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0805 23:46:24.813037   47941 command_runner.go:130] > [crio]
	I0805 23:46:24.813043   47941 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0805 23:46:24.813050   47941 command_runner.go:130] > # containers images, in this directory.
	I0805 23:46:24.813054   47941 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0805 23:46:24.813066   47941 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0805 23:46:24.813073   47941 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0805 23:46:24.813081   47941 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0805 23:46:24.813087   47941 command_runner.go:130] > # imagestore = ""
	I0805 23:46:24.813093   47941 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0805 23:46:24.813101   47941 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0805 23:46:24.813105   47941 command_runner.go:130] > storage_driver = "overlay"
	I0805 23:46:24.813111   47941 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0805 23:46:24.813116   47941 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0805 23:46:24.813140   47941 command_runner.go:130] > storage_option = [
	I0805 23:46:24.813153   47941 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0805 23:46:24.813157   47941 command_runner.go:130] > ]
	I0805 23:46:24.813167   47941 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0805 23:46:24.813180   47941 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0805 23:46:24.813188   47941 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0805 23:46:24.813196   47941 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0805 23:46:24.813203   47941 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0805 23:46:24.813211   47941 command_runner.go:130] > # always happen on a node reboot
	I0805 23:46:24.813218   47941 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0805 23:46:24.813227   47941 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0805 23:46:24.813236   47941 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0805 23:46:24.813246   47941 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0805 23:46:24.813257   47941 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0805 23:46:24.813273   47941 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0805 23:46:24.813288   47941 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0805 23:46:24.813296   47941 command_runner.go:130] > # internal_wipe = true
	I0805 23:46:24.813305   47941 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0805 23:46:24.813313   47941 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0805 23:46:24.813317   47941 command_runner.go:130] > # internal_repair = false
	I0805 23:46:24.813323   47941 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0805 23:46:24.813332   47941 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0805 23:46:24.813343   47941 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0805 23:46:24.813355   47941 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0805 23:46:24.813367   47941 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0805 23:46:24.813376   47941 command_runner.go:130] > [crio.api]
	I0805 23:46:24.813390   47941 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0805 23:46:24.813400   47941 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0805 23:46:24.813409   47941 command_runner.go:130] > # IP address on which the stream server will listen.
	I0805 23:46:24.813416   47941 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0805 23:46:24.813423   47941 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0805 23:46:24.813431   47941 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0805 23:46:24.813440   47941 command_runner.go:130] > # stream_port = "0"
	I0805 23:46:24.813451   47941 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0805 23:46:24.813458   47941 command_runner.go:130] > # stream_enable_tls = false
	I0805 23:46:24.813470   47941 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0805 23:46:24.813479   47941 command_runner.go:130] > # stream_idle_timeout = ""
	I0805 23:46:24.813495   47941 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0805 23:46:24.813507   47941 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0805 23:46:24.813515   47941 command_runner.go:130] > # minutes.
	I0805 23:46:24.813522   47941 command_runner.go:130] > # stream_tls_cert = ""
	I0805 23:46:24.813531   47941 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0805 23:46:24.813544   47941 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0805 23:46:24.813554   47941 command_runner.go:130] > # stream_tls_key = ""
	I0805 23:46:24.813567   47941 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0805 23:46:24.813579   47941 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0805 23:46:24.813598   47941 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0805 23:46:24.813605   47941 command_runner.go:130] > # stream_tls_ca = ""
	I0805 23:46:24.813613   47941 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0805 23:46:24.813623   47941 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0805 23:46:24.813637   47941 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0805 23:46:24.813649   47941 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0805 23:46:24.813660   47941 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0805 23:46:24.813671   47941 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0805 23:46:24.813678   47941 command_runner.go:130] > [crio.runtime]
	I0805 23:46:24.813688   47941 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0805 23:46:24.813699   47941 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0805 23:46:24.813703   47941 command_runner.go:130] > # "nofile=1024:2048"
	I0805 23:46:24.813709   47941 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0805 23:46:24.813717   47941 command_runner.go:130] > # default_ulimits = [
	I0805 23:46:24.813723   47941 command_runner.go:130] > # ]
	I0805 23:46:24.813736   47941 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0805 23:46:24.813742   47941 command_runner.go:130] > # no_pivot = false
	I0805 23:46:24.813754   47941 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0805 23:46:24.813766   47941 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0805 23:46:24.813778   47941 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0805 23:46:24.813789   47941 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0805 23:46:24.813799   47941 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0805 23:46:24.813808   47941 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0805 23:46:24.813818   47941 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0805 23:46:24.813827   47941 command_runner.go:130] > # Cgroup setting for conmon
	I0805 23:46:24.813840   47941 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0805 23:46:24.813850   47941 command_runner.go:130] > conmon_cgroup = "pod"
	I0805 23:46:24.813860   47941 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0805 23:46:24.813872   47941 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0805 23:46:24.813889   47941 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0805 23:46:24.813898   47941 command_runner.go:130] > conmon_env = [
	I0805 23:46:24.813908   47941 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0805 23:46:24.813915   47941 command_runner.go:130] > ]
	I0805 23:46:24.813924   47941 command_runner.go:130] > # Additional environment variables to set for all the
	I0805 23:46:24.813935   47941 command_runner.go:130] > # containers. These are overridden if set in the
	I0805 23:46:24.813947   47941 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0805 23:46:24.813957   47941 command_runner.go:130] > # default_env = [
	I0805 23:46:24.813964   47941 command_runner.go:130] > # ]
	I0805 23:46:24.813977   47941 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0805 23:46:24.813990   47941 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0805 23:46:24.813998   47941 command_runner.go:130] > # selinux = false
	I0805 23:46:24.814061   47941 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0805 23:46:24.814084   47941 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0805 23:46:24.814093   47941 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0805 23:46:24.814107   47941 command_runner.go:130] > # seccomp_profile = ""
	I0805 23:46:24.814119   47941 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0805 23:46:24.814131   47941 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0805 23:46:24.814155   47941 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0805 23:46:24.814165   47941 command_runner.go:130] > # which might increase security.
	I0805 23:46:24.814174   47941 command_runner.go:130] > # This option is currently deprecated,
	I0805 23:46:24.814186   47941 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0805 23:46:24.814197   47941 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0805 23:46:24.814211   47941 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0805 23:46:24.814224   47941 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0805 23:46:24.814237   47941 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0805 23:46:24.814250   47941 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0805 23:46:24.814261   47941 command_runner.go:130] > # This option supports live configuration reload.
	I0805 23:46:24.814268   47941 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0805 23:46:24.814275   47941 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0805 23:46:24.814285   47941 command_runner.go:130] > # the cgroup blockio controller.
	I0805 23:46:24.814296   47941 command_runner.go:130] > # blockio_config_file = ""
	I0805 23:46:24.814310   47941 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0805 23:46:24.814320   47941 command_runner.go:130] > # blockio parameters.
	I0805 23:46:24.814329   47941 command_runner.go:130] > # blockio_reload = false
	I0805 23:46:24.814341   47941 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0805 23:46:24.814350   47941 command_runner.go:130] > # irqbalance daemon.
	I0805 23:46:24.814361   47941 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0805 23:46:24.814374   47941 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0805 23:46:24.814387   47941 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0805 23:46:24.814401   47941 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0805 23:46:24.814416   47941 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0805 23:46:24.814429   47941 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0805 23:46:24.814441   47941 command_runner.go:130] > # This option supports live configuration reload.
	I0805 23:46:24.814450   47941 command_runner.go:130] > # rdt_config_file = ""
	I0805 23:46:24.814461   47941 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0805 23:46:24.814468   47941 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0805 23:46:24.814499   47941 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0805 23:46:24.814510   47941 command_runner.go:130] > # separate_pull_cgroup = ""
	I0805 23:46:24.814521   47941 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0805 23:46:24.814534   47941 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0805 23:46:24.814543   47941 command_runner.go:130] > # will be added.
	I0805 23:46:24.814552   47941 command_runner.go:130] > # default_capabilities = [
	I0805 23:46:24.814561   47941 command_runner.go:130] > # 	"CHOWN",
	I0805 23:46:24.814570   47941 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0805 23:46:24.814578   47941 command_runner.go:130] > # 	"FSETID",
	I0805 23:46:24.814581   47941 command_runner.go:130] > # 	"FOWNER",
	I0805 23:46:24.814585   47941 command_runner.go:130] > # 	"SETGID",
	I0805 23:46:24.814593   47941 command_runner.go:130] > # 	"SETUID",
	I0805 23:46:24.814601   47941 command_runner.go:130] > # 	"SETPCAP",
	I0805 23:46:24.814611   47941 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0805 23:46:24.814619   47941 command_runner.go:130] > # 	"KILL",
	I0805 23:46:24.814627   47941 command_runner.go:130] > # ]
	I0805 23:46:24.814638   47941 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0805 23:46:24.814651   47941 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0805 23:46:24.814661   47941 command_runner.go:130] > # add_inheritable_capabilities = false
	I0805 23:46:24.814670   47941 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0805 23:46:24.814679   47941 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0805 23:46:24.814688   47941 command_runner.go:130] > default_sysctls = [
	I0805 23:46:24.814696   47941 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0805 23:46:24.814706   47941 command_runner.go:130] > ]
	I0805 23:46:24.814714   47941 command_runner.go:130] > # List of devices on the host that a
	I0805 23:46:24.814726   47941 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0805 23:46:24.814736   47941 command_runner.go:130] > # allowed_devices = [
	I0805 23:46:24.814745   47941 command_runner.go:130] > # 	"/dev/fuse",
	I0805 23:46:24.814750   47941 command_runner.go:130] > # ]
	I0805 23:46:24.814759   47941 command_runner.go:130] > # List of additional devices. specified as
	I0805 23:46:24.814767   47941 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0805 23:46:24.814778   47941 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0805 23:46:24.814795   47941 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0805 23:46:24.814804   47941 command_runner.go:130] > # additional_devices = [
	I0805 23:46:24.814813   47941 command_runner.go:130] > # ]
	I0805 23:46:24.814825   47941 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0805 23:46:24.814834   47941 command_runner.go:130] > # cdi_spec_dirs = [
	I0805 23:46:24.814842   47941 command_runner.go:130] > # 	"/etc/cdi",
	I0805 23:46:24.814851   47941 command_runner.go:130] > # 	"/var/run/cdi",
	I0805 23:46:24.814857   47941 command_runner.go:130] > # ]
	I0805 23:46:24.814863   47941 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0805 23:46:24.814880   47941 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0805 23:46:24.814890   47941 command_runner.go:130] > # Defaults to false.
	I0805 23:46:24.814900   47941 command_runner.go:130] > # device_ownership_from_security_context = false
	I0805 23:46:24.814913   47941 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0805 23:46:24.814925   47941 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0805 23:46:24.814933   47941 command_runner.go:130] > # hooks_dir = [
	I0805 23:46:24.814944   47941 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0805 23:46:24.814952   47941 command_runner.go:130] > # ]
	I0805 23:46:24.814962   47941 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0805 23:46:24.814972   47941 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0805 23:46:24.814988   47941 command_runner.go:130] > # its default mounts from the following two files:
	I0805 23:46:24.814996   47941 command_runner.go:130] > #
	I0805 23:46:24.815012   47941 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0805 23:46:24.815025   47941 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0805 23:46:24.815036   47941 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0805 23:46:24.815043   47941 command_runner.go:130] > #
	I0805 23:46:24.815063   47941 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0805 23:46:24.815078   47941 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0805 23:46:24.815092   47941 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0805 23:46:24.815103   47941 command_runner.go:130] > #      only add mounts it finds in this file.
	I0805 23:46:24.815111   47941 command_runner.go:130] > #
	I0805 23:46:24.815119   47941 command_runner.go:130] > # default_mounts_file = ""
	I0805 23:46:24.815130   47941 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0805 23:46:24.815142   47941 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0805 23:46:24.815149   47941 command_runner.go:130] > pids_limit = 1024
	I0805 23:46:24.815160   47941 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0805 23:46:24.815173   47941 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0805 23:46:24.815186   47941 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0805 23:46:24.815201   47941 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0805 23:46:24.815210   47941 command_runner.go:130] > # log_size_max = -1
	I0805 23:46:24.815224   47941 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0805 23:46:24.815237   47941 command_runner.go:130] > # log_to_journald = false
	I0805 23:46:24.815245   47941 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0805 23:46:24.815256   47941 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0805 23:46:24.815268   47941 command_runner.go:130] > # Path to directory for container attach sockets.
	I0805 23:46:24.815278   47941 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0805 23:46:24.815290   47941 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0805 23:46:24.815300   47941 command_runner.go:130] > # bind_mount_prefix = ""
	I0805 23:46:24.815312   47941 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0805 23:46:24.815321   47941 command_runner.go:130] > # read_only = false
	I0805 23:46:24.815332   47941 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0805 23:46:24.815338   47941 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0805 23:46:24.815343   47941 command_runner.go:130] > # live configuration reload.
	I0805 23:46:24.815352   47941 command_runner.go:130] > # log_level = "info"
	I0805 23:46:24.815364   47941 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0805 23:46:24.815375   47941 command_runner.go:130] > # This option supports live configuration reload.
	I0805 23:46:24.815385   47941 command_runner.go:130] > # log_filter = ""
	I0805 23:46:24.815396   47941 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0805 23:46:24.815411   47941 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0805 23:46:24.815420   47941 command_runner.go:130] > # separated by comma.
	I0805 23:46:24.815432   47941 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0805 23:46:24.815440   47941 command_runner.go:130] > # uid_mappings = ""
	I0805 23:46:24.815454   47941 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0805 23:46:24.815467   47941 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0805 23:46:24.815477   47941 command_runner.go:130] > # separated by comma.
	I0805 23:46:24.815501   47941 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0805 23:46:24.815511   47941 command_runner.go:130] > # gid_mappings = ""
	I0805 23:46:24.815523   47941 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0805 23:46:24.815533   47941 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0805 23:46:24.815543   47941 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0805 23:46:24.815558   47941 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0805 23:46:24.815569   47941 command_runner.go:130] > # minimum_mappable_uid = -1
	I0805 23:46:24.815579   47941 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0805 23:46:24.815592   47941 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0805 23:46:24.815605   47941 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0805 23:46:24.815620   47941 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0805 23:46:24.815633   47941 command_runner.go:130] > # minimum_mappable_gid = -1
	I0805 23:46:24.815642   47941 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0805 23:46:24.815653   47941 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0805 23:46:24.815666   47941 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0805 23:46:24.815676   47941 command_runner.go:130] > # ctr_stop_timeout = 30
	I0805 23:46:24.815689   47941 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0805 23:46:24.815701   47941 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0805 23:46:24.815713   47941 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0805 23:46:24.815723   47941 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0805 23:46:24.815732   47941 command_runner.go:130] > drop_infra_ctr = false
	I0805 23:46:24.815740   47941 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0805 23:46:24.815753   47941 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0805 23:46:24.815768   47941 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0805 23:46:24.815778   47941 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0805 23:46:24.815792   47941 command_runner.go:130] > # shared_cpuset determines the CPU set which is allowed to be shared between guaranteed containers,
	I0805 23:46:24.815804   47941 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0805 23:46:24.815815   47941 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0805 23:46:24.815827   47941 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0805 23:46:24.815833   47941 command_runner.go:130] > # shared_cpuset = ""
	I0805 23:46:24.815840   47941 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0805 23:46:24.815851   47941 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0805 23:46:24.815861   47941 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0805 23:46:24.815884   47941 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0805 23:46:24.815894   47941 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0805 23:46:24.815907   47941 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0805 23:46:24.815919   47941 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0805 23:46:24.815927   47941 command_runner.go:130] > # enable_criu_support = false
	I0805 23:46:24.815935   47941 command_runner.go:130] > # Enable/disable the generation of the container,
	I0805 23:46:24.815944   47941 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0805 23:46:24.815954   47941 command_runner.go:130] > # enable_pod_events = false
	I0805 23:46:24.815965   47941 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0805 23:46:24.815990   47941 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0805 23:46:24.816001   47941 command_runner.go:130] > # default_runtime = "runc"
	I0805 23:46:24.816012   47941 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0805 23:46:24.816025   47941 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating them as directories).
	I0805 23:46:24.816039   47941 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0805 23:46:24.816056   47941 command_runner.go:130] > # creation as a file is not desired either.
	I0805 23:46:24.816073   47941 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0805 23:46:24.816083   47941 command_runner.go:130] > # the hostname is being managed dynamically.
	I0805 23:46:24.816094   47941 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0805 23:46:24.816099   47941 command_runner.go:130] > # ]
	I0805 23:46:24.816110   47941 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0805 23:46:24.816120   47941 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0805 23:46:24.816131   47941 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0805 23:46:24.816141   47941 command_runner.go:130] > # Each entry in the table should follow the format:
	I0805 23:46:24.816149   47941 command_runner.go:130] > #
	I0805 23:46:24.816157   47941 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0805 23:46:24.816168   47941 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0805 23:46:24.816197   47941 command_runner.go:130] > # runtime_type = "oci"
	I0805 23:46:24.816207   47941 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0805 23:46:24.816217   47941 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0805 23:46:24.816224   47941 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0805 23:46:24.816230   47941 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0805 23:46:24.816240   47941 command_runner.go:130] > # monitor_env = []
	I0805 23:46:24.816250   47941 command_runner.go:130] > # privileged_without_host_devices = false
	I0805 23:46:24.816257   47941 command_runner.go:130] > # allowed_annotations = []
	I0805 23:46:24.816270   47941 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0805 23:46:24.816278   47941 command_runner.go:130] > # Where:
	I0805 23:46:24.816343   47941 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0805 23:46:24.816374   47941 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0805 23:46:24.816390   47941 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0805 23:46:24.816403   47941 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0805 23:46:24.816412   47941 command_runner.go:130] > #   in $PATH.
	I0805 23:46:24.816424   47941 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0805 23:46:24.816435   47941 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0805 23:46:24.816447   47941 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0805 23:46:24.816457   47941 command_runner.go:130] > #   state.
	I0805 23:46:24.816471   47941 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0805 23:46:24.816484   47941 command_runner.go:130] > #   file. This can only be used with the VM runtime_type.
	I0805 23:46:24.816497   47941 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0805 23:46:24.816509   47941 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0805 23:46:24.816523   47941 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0805 23:46:24.816536   47941 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0805 23:46:24.816553   47941 command_runner.go:130] > #   The currently recognized values are:
	I0805 23:46:24.816567   47941 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0805 23:46:24.816582   47941 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0805 23:46:24.816595   47941 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0805 23:46:24.816607   47941 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0805 23:46:24.816621   47941 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0805 23:46:24.816634   47941 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0805 23:46:24.816645   47941 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0805 23:46:24.816655   47941 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0805 23:46:24.816667   47941 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0805 23:46:24.816680   47941 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0805 23:46:24.816691   47941 command_runner.go:130] > #   deprecated option "conmon".
	I0805 23:46:24.816705   47941 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0805 23:46:24.816715   47941 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0805 23:46:24.816729   47941 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0805 23:46:24.816739   47941 command_runner.go:130] > #   should be moved to the container's cgroup
	I0805 23:46:24.816749   47941 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0805 23:46:24.816759   47941 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0805 23:46:24.816840   47941 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0805 23:46:24.816850   47941 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0805 23:46:24.816855   47941 command_runner.go:130] > #
	I0805 23:46:24.816865   47941 command_runner.go:130] > # Using the seccomp notifier feature:
	I0805 23:46:24.816873   47941 command_runner.go:130] > #
	I0805 23:46:24.816883   47941 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0805 23:46:24.816896   47941 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0805 23:46:24.816904   47941 command_runner.go:130] > #
	I0805 23:46:24.816913   47941 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0805 23:46:24.816926   47941 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0805 23:46:24.816932   47941 command_runner.go:130] > #
	I0805 23:46:24.816940   47941 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0805 23:46:24.816949   47941 command_runner.go:130] > # feature.
	I0805 23:46:24.816957   47941 command_runner.go:130] > #
	I0805 23:46:24.816968   47941 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0805 23:46:24.816980   47941 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0805 23:46:24.816993   47941 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0805 23:46:24.817009   47941 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0805 23:46:24.817021   47941 command_runner.go:130] > # seconds if the annotation is set to "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0805 23:46:24.817028   47941 command_runner.go:130] > #
	I0805 23:46:24.817038   47941 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0805 23:46:24.817051   47941 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0805 23:46:24.817059   47941 command_runner.go:130] > #
	I0805 23:46:24.817069   47941 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0805 23:46:24.817082   47941 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0805 23:46:24.817090   47941 command_runner.go:130] > #
	I0805 23:46:24.817104   47941 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0805 23:46:24.817121   47941 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0805 23:46:24.817128   47941 command_runner.go:130] > # limitation.
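	For illustration only, a minimal sketch (not part of the log above) of a pod that opts into the seccomp notifier described above, assuming the chosen runtime handler lists "io.kubernetes.cri-o.seccompNotifierAction" in its allowed_annotations; names are hypothetical:

	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: seccomp-debug                      # hypothetical pod name
	    annotations:
	      # "stop" terminates the workload ~5 seconds after a blocked syscall is observed
	      io.kubernetes.cri-o.seccompNotifierAction: "stop"
	  spec:
	    restartPolicy: Never                     # required, otherwise the kubelet restarts the container immediately
	    containers:
	    - name: app                              # hypothetical container name
	      image: busybox
	      command: ["sleep", "3600"]
	      securityContext:
	        seccompProfile:
	          type: RuntimeDefault               # a seccomp profile must apply for the notifier to have anything to report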
	I0805 23:46:24.817135   47941 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0805 23:46:24.817145   47941 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0805 23:46:24.817154   47941 command_runner.go:130] > runtime_type = "oci"
	I0805 23:46:24.817161   47941 command_runner.go:130] > runtime_root = "/run/runc"
	I0805 23:46:24.817172   47941 command_runner.go:130] > runtime_config_path = ""
	I0805 23:46:24.817182   47941 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0805 23:46:24.817191   47941 command_runner.go:130] > monitor_cgroup = "pod"
	I0805 23:46:24.817201   47941 command_runner.go:130] > monitor_exec_cgroup = ""
	I0805 23:46:24.817210   47941 command_runner.go:130] > monitor_env = [
	I0805 23:46:24.817223   47941 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0805 23:46:24.817228   47941 command_runner.go:130] > ]
	I0805 23:46:24.817235   47941 command_runner.go:130] > privileged_without_host_devices = false
	I0805 23:46:24.817249   47941 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0805 23:46:24.817261   47941 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0805 23:46:24.817274   47941 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0805 23:46:24.817289   47941 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0805 23:46:24.817303   47941 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0805 23:46:24.817314   47941 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0805 23:46:24.817331   47941 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0805 23:46:24.817344   47941 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0805 23:46:24.817354   47941 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0805 23:46:24.817365   47941 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0805 23:46:24.817374   47941 command_runner.go:130] > # Example:
	I0805 23:46:24.817381   47941 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0805 23:46:24.817389   47941 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0805 23:46:24.817398   47941 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0805 23:46:24.817409   47941 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0805 23:46:24.817413   47941 command_runner.go:130] > # cpuset = 0
	I0805 23:46:24.817417   47941 command_runner.go:130] > # cpushares = "0-1"
	I0805 23:46:24.817420   47941 command_runner.go:130] > # Where:
	I0805 23:46:24.817427   47941 command_runner.go:130] > # The workload name is workload-type.
	I0805 23:46:24.817439   47941 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0805 23:46:24.817448   47941 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0805 23:46:24.817457   47941 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0805 23:46:24.817470   47941 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0805 23:46:24.817479   47941 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
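	For illustration only, a minimal sketch (not part of the log above) of a pod opting into the example "workload-type" workload and overriding cpushares for one container, following the annotation form shown in the example line above; the container name "app" and the value "512" are hypothetical:

	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: workload-example                   # hypothetical pod name
	    annotations:
	      io.crio/workload: ""                               # activation annotation; key only, value is ignored
	      io.crio.workload-type/app: '{"cpushares": "512"}'  # per-container override for container "app"
	  spec:
	    containers:
	    - name: app
	      image: busybox
	      command: ["sleep", "3600"]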
	I0805 23:46:24.817491   47941 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0805 23:46:24.817501   47941 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0805 23:46:24.817509   47941 command_runner.go:130] > # Default value is set to true
	I0805 23:46:24.817520   47941 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0805 23:46:24.817532   47941 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0805 23:46:24.817543   47941 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0805 23:46:24.817554   47941 command_runner.go:130] > # Default value is set to 'false'
	I0805 23:46:24.817564   47941 command_runner.go:130] > # disable_hostport_mapping = false
	I0805 23:46:24.817576   47941 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0805 23:46:24.817584   47941 command_runner.go:130] > #
	I0805 23:46:24.817596   47941 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0805 23:46:24.817605   47941 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0805 23:46:24.817616   47941 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0805 23:46:24.817629   47941 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0805 23:46:24.817641   47941 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0805 23:46:24.817650   47941 command_runner.go:130] > [crio.image]
	I0805 23:46:24.817662   47941 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0805 23:46:24.817671   47941 command_runner.go:130] > # default_transport = "docker://"
	I0805 23:46:24.817685   47941 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0805 23:46:24.817695   47941 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0805 23:46:24.817702   47941 command_runner.go:130] > # global_auth_file = ""
	I0805 23:46:24.817709   47941 command_runner.go:130] > # The image used to instantiate infra containers.
	I0805 23:46:24.817721   47941 command_runner.go:130] > # This option supports live configuration reload.
	I0805 23:46:24.817732   47941 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0805 23:46:24.817745   47941 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0805 23:46:24.817757   47941 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0805 23:46:24.817769   47941 command_runner.go:130] > # This option supports live configuration reload.
	I0805 23:46:24.817780   47941 command_runner.go:130] > # pause_image_auth_file = ""
	I0805 23:46:24.817794   47941 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0805 23:46:24.817807   47941 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0805 23:46:24.817821   47941 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0805 23:46:24.817832   47941 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0805 23:46:24.817842   47941 command_runner.go:130] > # pause_command = "/pause"
	I0805 23:46:24.817854   47941 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0805 23:46:24.817866   47941 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0805 23:46:24.817874   47941 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0805 23:46:24.817893   47941 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0805 23:46:24.817905   47941 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0805 23:46:24.817919   47941 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0805 23:46:24.817929   47941 command_runner.go:130] > # pinned_images = [
	I0805 23:46:24.817937   47941 command_runner.go:130] > # ]
	I0805 23:46:24.817949   47941 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0805 23:46:24.817962   47941 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0805 23:46:24.817972   47941 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0805 23:46:24.817982   47941 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0805 23:46:24.817993   47941 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0805 23:46:24.818003   47941 command_runner.go:130] > # signature_policy = ""
	I0805 23:46:24.818018   47941 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0805 23:46:24.818032   47941 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0805 23:46:24.818045   47941 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0805 23:46:24.818057   47941 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0805 23:46:24.818069   47941 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0805 23:46:24.818076   47941 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0805 23:46:24.818085   47941 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0805 23:46:24.818099   47941 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0805 23:46:24.818110   47941 command_runner.go:130] > # changing them here.
	I0805 23:46:24.818120   47941 command_runner.go:130] > # insecure_registries = [
	I0805 23:46:24.818128   47941 command_runner.go:130] > # ]
	I0805 23:46:24.818141   47941 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0805 23:46:24.818152   47941 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0805 23:46:24.818160   47941 command_runner.go:130] > # image_volumes = "mkdir"
	I0805 23:46:24.818165   47941 command_runner.go:130] > # Temporary directory to use for storing big files
	I0805 23:46:24.818174   47941 command_runner.go:130] > # big_files_temporary_dir = ""
	I0805 23:46:24.818195   47941 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0805 23:46:24.818205   47941 command_runner.go:130] > # CNI plugins.
	I0805 23:46:24.818213   47941 command_runner.go:130] > [crio.network]
	I0805 23:46:24.818225   47941 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0805 23:46:24.818237   47941 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0805 23:46:24.818246   47941 command_runner.go:130] > # cni_default_network = ""
	I0805 23:46:24.818254   47941 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0805 23:46:24.818263   47941 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0805 23:46:24.818275   47941 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0805 23:46:24.818285   47941 command_runner.go:130] > # plugin_dirs = [
	I0805 23:46:24.818294   47941 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0805 23:46:24.818303   47941 command_runner.go:130] > # ]
	I0805 23:46:24.818313   47941 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0805 23:46:24.818322   47941 command_runner.go:130] > [crio.metrics]
	I0805 23:46:24.818332   47941 command_runner.go:130] > # Globally enable or disable metrics support.
	I0805 23:46:24.818339   47941 command_runner.go:130] > enable_metrics = true
	I0805 23:46:24.818344   47941 command_runner.go:130] > # Specify enabled metrics collectors.
	I0805 23:46:24.818354   47941 command_runner.go:130] > # Per default all metrics are enabled.
	I0805 23:46:24.818366   47941 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0805 23:46:24.818380   47941 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0805 23:46:24.818392   47941 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0805 23:46:24.818401   47941 command_runner.go:130] > # metrics_collectors = [
	I0805 23:46:24.818411   47941 command_runner.go:130] > # 	"operations",
	I0805 23:46:24.818422   47941 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0805 23:46:24.818431   47941 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0805 23:46:24.818436   47941 command_runner.go:130] > # 	"operations_errors",
	I0805 23:46:24.818440   47941 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0805 23:46:24.818450   47941 command_runner.go:130] > # 	"image_pulls_by_name",
	I0805 23:46:24.818461   47941 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0805 23:46:24.818472   47941 command_runner.go:130] > # 	"image_pulls_failures",
	I0805 23:46:24.818483   47941 command_runner.go:130] > # 	"image_pulls_successes",
	I0805 23:46:24.818492   47941 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0805 23:46:24.818502   47941 command_runner.go:130] > # 	"image_layer_reuse",
	I0805 23:46:24.818512   47941 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0805 23:46:24.818521   47941 command_runner.go:130] > # 	"containers_oom_total",
	I0805 23:46:24.818529   47941 command_runner.go:130] > # 	"containers_oom",
	I0805 23:46:24.818533   47941 command_runner.go:130] > # 	"processes_defunct",
	I0805 23:46:24.818539   47941 command_runner.go:130] > # 	"operations_total",
	I0805 23:46:24.818548   47941 command_runner.go:130] > # 	"operations_latency_seconds",
	I0805 23:46:24.818559   47941 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0805 23:46:24.818567   47941 command_runner.go:130] > # 	"operations_errors_total",
	I0805 23:46:24.818577   47941 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0805 23:46:24.818587   47941 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0805 23:46:24.818596   47941 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0805 23:46:24.818607   47941 command_runner.go:130] > # 	"image_pulls_success_total",
	I0805 23:46:24.818619   47941 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0805 23:46:24.818627   47941 command_runner.go:130] > # 	"containers_oom_count_total",
	I0805 23:46:24.818632   47941 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0805 23:46:24.818641   47941 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0805 23:46:24.818650   47941 command_runner.go:130] > # ]
	I0805 23:46:24.818662   47941 command_runner.go:130] > # The port on which the metrics server will listen.
	I0805 23:46:24.818671   47941 command_runner.go:130] > # metrics_port = 9090
	I0805 23:46:24.818683   47941 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0805 23:46:24.818692   47941 command_runner.go:130] > # metrics_socket = ""
	I0805 23:46:24.818703   47941 command_runner.go:130] > # The certificate for the secure metrics server.
	I0805 23:46:24.818714   47941 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0805 23:46:24.818724   47941 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0805 23:46:24.818734   47941 command_runner.go:130] > # certificate on any modification event.
	I0805 23:46:24.818744   47941 command_runner.go:130] > # metrics_cert = ""
	I0805 23:46:24.818752   47941 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0805 23:46:24.818764   47941 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0805 23:46:24.818773   47941 command_runner.go:130] > # metrics_key = ""
	I0805 23:46:24.818791   47941 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0805 23:46:24.818801   47941 command_runner.go:130] > [crio.tracing]
	I0805 23:46:24.818811   47941 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0805 23:46:24.818819   47941 command_runner.go:130] > # enable_tracing = false
	I0805 23:46:24.818827   47941 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0805 23:46:24.818838   47941 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0805 23:46:24.818853   47941 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0805 23:46:24.818864   47941 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0805 23:46:24.818874   47941 command_runner.go:130] > # CRI-O NRI configuration.
	I0805 23:46:24.818883   47941 command_runner.go:130] > [crio.nri]
	I0805 23:46:24.818893   47941 command_runner.go:130] > # Globally enable or disable NRI.
	I0805 23:46:24.818900   47941 command_runner.go:130] > # enable_nri = false
	I0805 23:46:24.818904   47941 command_runner.go:130] > # NRI socket to listen on.
	I0805 23:46:24.818913   47941 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0805 23:46:24.818923   47941 command_runner.go:130] > # NRI plugin directory to use.
	I0805 23:46:24.818935   47941 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0805 23:46:24.818945   47941 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0805 23:46:24.818957   47941 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0805 23:46:24.818968   47941 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0805 23:46:24.818978   47941 command_runner.go:130] > # nri_disable_connections = false
	I0805 23:46:24.818987   47941 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0805 23:46:24.818996   47941 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0805 23:46:24.819008   47941 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0805 23:46:24.819017   47941 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0805 23:46:24.819028   47941 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0805 23:46:24.819037   47941 command_runner.go:130] > [crio.stats]
	I0805 23:46:24.819064   47941 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0805 23:46:24.819077   47941 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0805 23:46:24.819087   47941 command_runner.go:130] > # stats_collection_period = 0
	I0805 23:46:24.819209   47941 cni.go:84] Creating CNI manager for ""
	I0805 23:46:24.819221   47941 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0805 23:46:24.819231   47941 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 23:46:24.819258   47941 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.10 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-342677 NodeName:multinode-342677 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 23:46:24.819422   47941 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-342677"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.10
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.10"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 23:46:24.819489   47941 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 23:46:24.830409   47941 command_runner.go:130] > kubeadm
	I0805 23:46:24.830431   47941 command_runner.go:130] > kubectl
	I0805 23:46:24.830439   47941 command_runner.go:130] > kubelet
	I0805 23:46:24.830458   47941 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 23:46:24.830516   47941 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 23:46:24.840701   47941 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0805 23:46:24.858099   47941 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 23:46:24.875301   47941 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0805 23:46:24.892377   47941 ssh_runner.go:195] Run: grep 192.168.39.10	control-plane.minikube.internal$ /etc/hosts
	I0805 23:46:24.896281   47941 command_runner.go:130] > 192.168.39.10	control-plane.minikube.internal
	I0805 23:46:24.896358   47941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 23:46:25.039844   47941 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 23:46:25.054795   47941 certs.go:68] Setting up /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/multinode-342677 for IP: 192.168.39.10
	I0805 23:46:25.054825   47941 certs.go:194] generating shared ca certs ...
	I0805 23:46:25.054861   47941 certs.go:226] acquiring lock for ca certs: {Name:mkf35a042c1656d191f542eee7fa087aad4d29d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 23:46:25.055026   47941 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key
	I0805 23:46:25.055129   47941 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key
	I0805 23:46:25.055142   47941 certs.go:256] generating profile certs ...
	I0805 23:46:25.055227   47941 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/multinode-342677/client.key
	I0805 23:46:25.055280   47941 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/multinode-342677/apiserver.key.35d08239
	I0805 23:46:25.055323   47941 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/multinode-342677/proxy-client.key
	I0805 23:46:25.055333   47941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0805 23:46:25.055347   47941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0805 23:46:25.055359   47941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0805 23:46:25.055371   47941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0805 23:46:25.055386   47941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/multinode-342677/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0805 23:46:25.055399   47941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/multinode-342677/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0805 23:46:25.055411   47941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/multinode-342677/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0805 23:46:25.055423   47941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/multinode-342677/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0805 23:46:25.055482   47941 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792.pem (1338 bytes)
	W0805 23:46:25.055509   47941 certs.go:480] ignoring /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792_empty.pem, impossibly tiny 0 bytes
	I0805 23:46:25.055518   47941 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 23:46:25.055538   47941 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem (1082 bytes)
	I0805 23:46:25.055560   47941 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem (1123 bytes)
	I0805 23:46:25.055582   47941 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem (1679 bytes)
	I0805 23:46:25.055618   47941 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem (1708 bytes)
	I0805 23:46:25.055643   47941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792.pem -> /usr/share/ca-certificates/16792.pem
	I0805 23:46:25.055656   47941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem -> /usr/share/ca-certificates/167922.pem
	I0805 23:46:25.055668   47941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0805 23:46:25.056293   47941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 23:46:25.081605   47941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0805 23:46:25.105594   47941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 23:46:25.129306   47941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0805 23:46:25.155297   47941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/multinode-342677/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0805 23:46:25.179024   47941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/multinode-342677/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0805 23:46:25.202720   47941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/multinode-342677/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 23:46:25.226906   47941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/multinode-342677/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 23:46:25.251500   47941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792.pem --> /usr/share/ca-certificates/16792.pem (1338 bytes)
	I0805 23:46:25.275954   47941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem --> /usr/share/ca-certificates/167922.pem (1708 bytes)
	I0805 23:46:25.300838   47941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 23:46:25.326582   47941 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 23:46:25.344163   47941 ssh_runner.go:195] Run: openssl version
	I0805 23:46:25.349800   47941 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0805 23:46:25.349999   47941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16792.pem && ln -fs /usr/share/ca-certificates/16792.pem /etc/ssl/certs/16792.pem"
	I0805 23:46:25.362145   47941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16792.pem
	I0805 23:46:25.366828   47941 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  5 23:03 /usr/share/ca-certificates/16792.pem
	I0805 23:46:25.366860   47941 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 23:03 /usr/share/ca-certificates/16792.pem
	I0805 23:46:25.366918   47941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16792.pem
	I0805 23:46:25.372682   47941 command_runner.go:130] > 51391683
	I0805 23:46:25.372754   47941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16792.pem /etc/ssl/certs/51391683.0"
	I0805 23:46:25.382568   47941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167922.pem && ln -fs /usr/share/ca-certificates/167922.pem /etc/ssl/certs/167922.pem"
	I0805 23:46:25.393372   47941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167922.pem
	I0805 23:46:25.397795   47941 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  5 23:03 /usr/share/ca-certificates/167922.pem
	I0805 23:46:25.397964   47941 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 23:03 /usr/share/ca-certificates/167922.pem
	I0805 23:46:25.398009   47941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167922.pem
	I0805 23:46:25.403461   47941 command_runner.go:130] > 3ec20f2e
	I0805 23:46:25.403517   47941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167922.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 23:46:25.412918   47941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 23:46:25.425122   47941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 23:46:25.429785   47941 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  5 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0805 23:46:25.429937   47941 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0805 23:46:25.429984   47941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 23:46:25.435580   47941 command_runner.go:130] > b5213941
	I0805 23:46:25.435643   47941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 23:46:25.445301   47941 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 23:46:25.449790   47941 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 23:46:25.449817   47941 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0805 23:46:25.449825   47941 command_runner.go:130] > Device: 253,1	Inode: 4197931     Links: 1
	I0805 23:46:25.449834   47941 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0805 23:46:25.449842   47941 command_runner.go:130] > Access: 2024-08-05 23:39:24.040104079 +0000
	I0805 23:46:25.449850   47941 command_runner.go:130] > Modify: 2024-08-05 23:39:24.040104079 +0000
	I0805 23:46:25.449857   47941 command_runner.go:130] > Change: 2024-08-05 23:39:24.040104079 +0000
	I0805 23:46:25.449867   47941 command_runner.go:130] >  Birth: 2024-08-05 23:39:24.040104079 +0000
	I0805 23:46:25.449954   47941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 23:46:25.455723   47941 command_runner.go:130] > Certificate will not expire
	I0805 23:46:25.455957   47941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 23:46:25.461732   47941 command_runner.go:130] > Certificate will not expire
	I0805 23:46:25.461842   47941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 23:46:25.467480   47941 command_runner.go:130] > Certificate will not expire
	I0805 23:46:25.467552   47941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 23:46:25.473223   47941 command_runner.go:130] > Certificate will not expire
	I0805 23:46:25.473271   47941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 23:46:25.478777   47941 command_runner.go:130] > Certificate will not expire
	I0805 23:46:25.478849   47941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0805 23:46:25.484327   47941 command_runner.go:130] > Certificate will not expire
	I0805 23:46:25.484386   47941 kubeadm.go:392] StartCluster: {Name:multinode-342677 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-342677 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.10 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.75 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 23:46:25.484496   47941 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0805 23:46:25.484543   47941 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0805 23:46:25.523255   47941 command_runner.go:130] > 00b5601d99857e510793bf69888cd3fe706ae1919ea2fa58cd1e49e2cc8fe8f3
	I0805 23:46:25.523287   47941 command_runner.go:130] > c8b4c139ba9f34045731ac7ff528df9d7aebaf6e5e682eeac1c47b5710313379
	I0805 23:46:25.523297   47941 command_runner.go:130] > 150ce9e294a897bb1eee154f726f0956df1618247219dcd049722c011dbe331e
	I0805 23:46:25.523308   47941 command_runner.go:130] > 3b1d0ef18e29d3787609be51f754e7f2324ee16d19d999762bac401d079a7fd2
	I0805 23:46:25.523317   47941 command_runner.go:130] > f227e8cf03b66f737d02e2c7b817576ad72901aa61a0e63d337fb36ec9c32943
	I0805 23:46:25.523325   47941 command_runner.go:130] > 5cc7242052f30bef2f21e600e245b76900de63c25a681c55c467489b4bb4cad9
	I0805 23:46:25.523334   47941 command_runner.go:130] > 30e13b94e51e4836e65d865d70745d086a906658385b8b067fe0d8e69095705e
	I0805 23:46:25.523344   47941 command_runner.go:130] > 9d3772211d8011c9a6554ddc5569f3920bbe3050b56a031062e0557cf43be0e2
	I0805 23:46:25.523375   47941 cri.go:89] found id: "00b5601d99857e510793bf69888cd3fe706ae1919ea2fa58cd1e49e2cc8fe8f3"
	I0805 23:46:25.523387   47941 cri.go:89] found id: "c8b4c139ba9f34045731ac7ff528df9d7aebaf6e5e682eeac1c47b5710313379"
	I0805 23:46:25.523393   47941 cri.go:89] found id: "150ce9e294a897bb1eee154f726f0956df1618247219dcd049722c011dbe331e"
	I0805 23:46:25.523398   47941 cri.go:89] found id: "3b1d0ef18e29d3787609be51f754e7f2324ee16d19d999762bac401d079a7fd2"
	I0805 23:46:25.523402   47941 cri.go:89] found id: "f227e8cf03b66f737d02e2c7b817576ad72901aa61a0e63d337fb36ec9c32943"
	I0805 23:46:25.523406   47941 cri.go:89] found id: "5cc7242052f30bef2f21e600e245b76900de63c25a681c55c467489b4bb4cad9"
	I0805 23:46:25.523409   47941 cri.go:89] found id: "30e13b94e51e4836e65d865d70745d086a906658385b8b067fe0d8e69095705e"
	I0805 23:46:25.523413   47941 cri.go:89] found id: "9d3772211d8011c9a6554ddc5569f3920bbe3050b56a031062e0557cf43be0e2"
	I0805 23:46:25.523415   47941 cri.go:89] found id: ""
	I0805 23:46:25.523455   47941 ssh_runner.go:195] Run: sudo runc list -f json
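The "found id" entries above come from the `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` call a few lines earlier: each non-empty output line is one container ID, covering both running and exited kube-system containers. A simplified local sketch of that listing step follows (illustrative only; it assumes crictl is on PATH and sudo is available, whereas minikube runs the command over SSH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers mirrors the logged crictl invocation: list all
// containers (running or exited) labelled with the kube-system namespace
// and return their IDs, one per output line.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command(
		"sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system",
	).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps: %w", err)
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}
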
	
	
	==> CRI-O <==
	Aug 05 23:50:38 multinode-342677 crio[2886]: time="2024-08-05 23:50:38.778007147Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722901838777944031,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143053,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=baeaf094-e789-4037-9740-d46ff741cfe9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:50:38 multinode-342677 crio[2886]: time="2024-08-05 23:50:38.778811038Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=41105f5a-e03e-4bde-9c82-8bf05ece8534 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:50:38 multinode-342677 crio[2886]: time="2024-08-05 23:50:38.778885198Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=41105f5a-e03e-4bde-9c82-8bf05ece8534 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:50:38 multinode-342677 crio[2886]: time="2024-08-05 23:50:38.779271676Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a0e5a6b7ec4cef49b6d8fde0b12859c53f333ffac6eb59a728ac65e9274ba3bf,PodSandboxId:9c058b71634ff24060e0e8b0c1b24a92c2863e78046e35171cf27bb43980ef81,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722901625795332746,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-78mt7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2761ea7e-d8a2-40d3-bd8d-a2e484b0bec3,},Annotations:map[string]string{io.kubernetes.container.hash: 5d980674,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62f62a176c70a254e48450706f9b7524e717202076210725309a1e6c28138bc,PodSandboxId:ee8ad4fa44950fa02aa006da747e47186dc3d9aa497736cc81b26273582092da,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1722901592221740056,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6c596,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8a66d1c-c60f-4a75-8104-151faf7922b9,},Annotations:map[string]string{io.kubernetes.container.hash: 4ff057c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6797a9b46983671da1ba766fb34f5be198a6f9d393ac2f171339c0def77c28e1,PodSandboxId:50b50402089f5ef893f2e1020443ac3740204eafd51c0bb3b0c1a95387d4a4f4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722901592169187259,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v42dl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f82457c8-44fc-476d-828b-ac33899c132b,},Annotations:map[string]string{io.kubernetes.container.hash: 66c772fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b79ca8015a145db755007359177d373f8fb63ee8d261e67f64838e7af497133,PodSandboxId:f794a28dee848aa9a6e5f529ff4b0ac13bcdc20f7efc577d10f83af4d0a7f96e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722901591997430400,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2dnzb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddda1087-36af-4e82-88d3-54e6348c5e22,},Annotations:map[string]
string{io.kubernetes.container.hash: 10979fcd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5a92de8354098f5dc94e8aa94bb5d5aad51d11e8a6025988fd20a80568eee49,PodSandboxId:b37ed3d64fd18e7e93f99b002137d4d76775a81143bf44397c5edabd2b86fcc0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722901591963793102,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71064ea8-4354-4f74-9efc-52487675def4,},Annotations:map[string]string{io.ku
bernetes.container.hash: f7b2680,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:943c42b387fc724e757ea4b76361e6a758b577b524c7c10390b65369cea51422,PodSandboxId:054f8fceeb7cdcfd41e776e32bd55b59140891b42d00369b60b9aaad1a58465c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722901588232266700,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f1a0b5192f07729588fefe71f91e855,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:268fef5c96aefcf57fc17aa09c4ebf2c737c37b5bdc83fe67a396bfa1b804384,PodSandboxId:365c7bc7a2b1ca350880d983794b4c6feb8597b97f66dfc3b8a048ac9720136c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722901588210215125,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe9d35cc8086dee57d5df88c8a99e7d8,},Annotations:map[string]string{io.kubernetes.container.hash: f19e9654,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45dfe807a437e2efa653406b8d23df5726dff92deecdb42360742ab37c64c201,PodSandboxId:ac8cb2ec338da417c69fee22ef95616b800021fb34ffc5c70712e3fcbf35a0d0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722901588202172871,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20dbf724a8b080d422a73b396072a4c7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdaf3015949ce621acf67c07735918381578c9af19ebd3e5221f87a4cd2af079,PodSandboxId:3b873b27fc9d25b6c540d99c09926b8080f05d4f2343f9b81ce0b3b945380ea9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722901588169836346,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efe0bdf356940b826a4ba3b020e6529c,},Annotations:map[string]string{io.kubernetes.container.hash: d82bc00b,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ca4e362daa3cf637429cc280868001a87ead2a1c6b86c42ca8880864eb2b33b,PodSandboxId:cbe52e52d544551a4eaeb48f07902ba0252b2e562e0b7426cc66a20762a4a053,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722901260953256689,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-78mt7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2761ea7e-d8a2-40d3-bd8d-a2e484b0bec3,},Annotations:map[string]string{io.kubernetes.container.hash: 5d980674,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00b5601d99857e510793bf69888cd3fe706ae1919ea2fa58cd1e49e2cc8fe8f3,PodSandboxId:3d9e1ffaf8822a12d115003c1883d1817957a0bcb4e4d516649e4b91ab06ba3c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722901206217934742,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v42dl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f82457c8-44fc-476d-828b-ac33899c132b,},Annotations:map[string]string{io.kubernetes.container.hash: 66c772fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8b4c139ba9f34045731ac7ff528df9d7aebaf6e5e682eeac1c47b5710313379,PodSandboxId:a8259144d8379242d353a86d5adec712cc26b0a08e440fabe5668e9603e2a7e1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722901204667548032,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 71064ea8-4354-4f74-9efc-52487675def4,},Annotations:map[string]string{io.kubernetes.container.hash: f7b2680,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:150ce9e294a897bb1eee154f726f0956df1618247219dcd049722c011dbe331e,PodSandboxId:0e5f1c11948a79d3e2c7d6179de4c73e196695f95a60f9892b69f6ec45c16d38,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1722901192992033098,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6c596,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: a8a66d1c-c60f-4a75-8104-151faf7922b9,},Annotations:map[string]string{io.kubernetes.container.hash: 4ff057c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b1d0ef18e29d3787609be51f754e7f2324ee16d19d999762bac401d079a7fd2,PodSandboxId:353f772917eac257829637e206a259eb1f44afeea71efacfab5ce5d5af8892b0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722901189044346374,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2dnzb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: ddda1087-36af-4e82-88d3-54e6348c5e22,},Annotations:map[string]string{io.kubernetes.container.hash: 10979fcd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e13b94e51e4836e65d865d70745d086a906658385b8b067fe0d8e69095705e,PodSandboxId:18c7ba91ecfa22ab34982fbb76f08587f43a0f966129ab03ec78113f3a756e1c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722901168318903940,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe9d35cc8086dee57d5df88c8a99e7d8
,},Annotations:map[string]string{io.kubernetes.container.hash: f19e9654,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f227e8cf03b66f737d02e2c7b817576ad72901aa61a0e63d337fb36ec9c32943,PodSandboxId:08f166f93ecd6c382e72917c7c4f41f7606ffe9bf055ba92daa336772820b451,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722901168373329147,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efe0bdf356940b826a4ba3b020e6529c,},Annotations:
map[string]string{io.kubernetes.container.hash: d82bc00b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cc7242052f30bef2f21e600e245b76900de63c25a681c55c467489b4bb4cad9,PodSandboxId:1cfcc3af05ebb8e14183a8bcbdba732a57b181af412a613bd2bbd2579cebbef4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722901168327637273,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f1a0b5192f07729588fefe71f91e855,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d3772211d8011c9a6554ddc5569f3920bbe3050b56a031062e0557cf43be0e2,PodSandboxId:206121b53ee872114e8fe65e58499c97a566a2df9439ac8dd4a51eaa92a99fa1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722901168281340148,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20dbf724a8b080d422a73b396072a4c7,},Annotations:map
[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=41105f5a-e03e-4bde-9c82-8bf05ece8534 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:50:38 multinode-342677 crio[2886]: time="2024-08-05 23:50:38.825091592Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3d15be71-b164-4280-af2e-2188ab011964 name=/runtime.v1.RuntimeService/Version
	Aug 05 23:50:38 multinode-342677 crio[2886]: time="2024-08-05 23:50:38.825187923Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3d15be71-b164-4280-af2e-2188ab011964 name=/runtime.v1.RuntimeService/Version
	Aug 05 23:50:38 multinode-342677 crio[2886]: time="2024-08-05 23:50:38.826486835Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0ddf5f18-d3bd-4233-b876-598a10bef725 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:50:38 multinode-342677 crio[2886]: time="2024-08-05 23:50:38.827108960Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722901838827085295,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143053,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0ddf5f18-d3bd-4233-b876-598a10bef725 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:50:38 multinode-342677 crio[2886]: time="2024-08-05 23:50:38.827626272Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d0301dfe-949f-43d2-a8d1-3ce913245bad name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:50:38 multinode-342677 crio[2886]: time="2024-08-05 23:50:38.827773360Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d0301dfe-949f-43d2-a8d1-3ce913245bad name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:50:38 multinode-342677 crio[2886]: time="2024-08-05 23:50:38.828157322Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a0e5a6b7ec4cef49b6d8fde0b12859c53f333ffac6eb59a728ac65e9274ba3bf,PodSandboxId:9c058b71634ff24060e0e8b0c1b24a92c2863e78046e35171cf27bb43980ef81,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722901625795332746,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-78mt7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2761ea7e-d8a2-40d3-bd8d-a2e484b0bec3,},Annotations:map[string]string{io.kubernetes.container.hash: 5d980674,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62f62a176c70a254e48450706f9b7524e717202076210725309a1e6c28138bc,PodSandboxId:ee8ad4fa44950fa02aa006da747e47186dc3d9aa497736cc81b26273582092da,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1722901592221740056,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6c596,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8a66d1c-c60f-4a75-8104-151faf7922b9,},Annotations:map[string]string{io.kubernetes.container.hash: 4ff057c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6797a9b46983671da1ba766fb34f5be198a6f9d393ac2f171339c0def77c28e1,PodSandboxId:50b50402089f5ef893f2e1020443ac3740204eafd51c0bb3b0c1a95387d4a4f4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722901592169187259,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v42dl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f82457c8-44fc-476d-828b-ac33899c132b,},Annotations:map[string]string{io.kubernetes.container.hash: 66c772fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b79ca8015a145db755007359177d373f8fb63ee8d261e67f64838e7af497133,PodSandboxId:f794a28dee848aa9a6e5f529ff4b0ac13bcdc20f7efc577d10f83af4d0a7f96e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722901591997430400,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2dnzb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddda1087-36af-4e82-88d3-54e6348c5e22,},Annotations:map[string]
string{io.kubernetes.container.hash: 10979fcd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5a92de8354098f5dc94e8aa94bb5d5aad51d11e8a6025988fd20a80568eee49,PodSandboxId:b37ed3d64fd18e7e93f99b002137d4d76775a81143bf44397c5edabd2b86fcc0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722901591963793102,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71064ea8-4354-4f74-9efc-52487675def4,},Annotations:map[string]string{io.ku
bernetes.container.hash: f7b2680,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:943c42b387fc724e757ea4b76361e6a758b577b524c7c10390b65369cea51422,PodSandboxId:054f8fceeb7cdcfd41e776e32bd55b59140891b42d00369b60b9aaad1a58465c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722901588232266700,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f1a0b5192f07729588fefe71f91e855,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:268fef5c96aefcf57fc17aa09c4ebf2c737c37b5bdc83fe67a396bfa1b804384,PodSandboxId:365c7bc7a2b1ca350880d983794b4c6feb8597b97f66dfc3b8a048ac9720136c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722901588210215125,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe9d35cc8086dee57d5df88c8a99e7d8,},Annotations:map[string]string{io.kubernetes.container.hash: f19e9654,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45dfe807a437e2efa653406b8d23df5726dff92deecdb42360742ab37c64c201,PodSandboxId:ac8cb2ec338da417c69fee22ef95616b800021fb34ffc5c70712e3fcbf35a0d0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722901588202172871,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20dbf724a8b080d422a73b396072a4c7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdaf3015949ce621acf67c07735918381578c9af19ebd3e5221f87a4cd2af079,PodSandboxId:3b873b27fc9d25b6c540d99c09926b8080f05d4f2343f9b81ce0b3b945380ea9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722901588169836346,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efe0bdf356940b826a4ba3b020e6529c,},Annotations:map[string]string{io.kubernetes.container.hash: d82bc00b,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ca4e362daa3cf637429cc280868001a87ead2a1c6b86c42ca8880864eb2b33b,PodSandboxId:cbe52e52d544551a4eaeb48f07902ba0252b2e562e0b7426cc66a20762a4a053,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722901260953256689,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-78mt7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2761ea7e-d8a2-40d3-bd8d-a2e484b0bec3,},Annotations:map[string]string{io.kubernetes.container.hash: 5d980674,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00b5601d99857e510793bf69888cd3fe706ae1919ea2fa58cd1e49e2cc8fe8f3,PodSandboxId:3d9e1ffaf8822a12d115003c1883d1817957a0bcb4e4d516649e4b91ab06ba3c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722901206217934742,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v42dl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f82457c8-44fc-476d-828b-ac33899c132b,},Annotations:map[string]string{io.kubernetes.container.hash: 66c772fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8b4c139ba9f34045731ac7ff528df9d7aebaf6e5e682eeac1c47b5710313379,PodSandboxId:a8259144d8379242d353a86d5adec712cc26b0a08e440fabe5668e9603e2a7e1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722901204667548032,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 71064ea8-4354-4f74-9efc-52487675def4,},Annotations:map[string]string{io.kubernetes.container.hash: f7b2680,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:150ce9e294a897bb1eee154f726f0956df1618247219dcd049722c011dbe331e,PodSandboxId:0e5f1c11948a79d3e2c7d6179de4c73e196695f95a60f9892b69f6ec45c16d38,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1722901192992033098,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6c596,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: a8a66d1c-c60f-4a75-8104-151faf7922b9,},Annotations:map[string]string{io.kubernetes.container.hash: 4ff057c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b1d0ef18e29d3787609be51f754e7f2324ee16d19d999762bac401d079a7fd2,PodSandboxId:353f772917eac257829637e206a259eb1f44afeea71efacfab5ce5d5af8892b0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722901189044346374,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2dnzb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: ddda1087-36af-4e82-88d3-54e6348c5e22,},Annotations:map[string]string{io.kubernetes.container.hash: 10979fcd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e13b94e51e4836e65d865d70745d086a906658385b8b067fe0d8e69095705e,PodSandboxId:18c7ba91ecfa22ab34982fbb76f08587f43a0f966129ab03ec78113f3a756e1c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722901168318903940,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe9d35cc8086dee57d5df88c8a99e7d8
,},Annotations:map[string]string{io.kubernetes.container.hash: f19e9654,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f227e8cf03b66f737d02e2c7b817576ad72901aa61a0e63d337fb36ec9c32943,PodSandboxId:08f166f93ecd6c382e72917c7c4f41f7606ffe9bf055ba92daa336772820b451,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722901168373329147,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efe0bdf356940b826a4ba3b020e6529c,},Annotations:
map[string]string{io.kubernetes.container.hash: d82bc00b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cc7242052f30bef2f21e600e245b76900de63c25a681c55c467489b4bb4cad9,PodSandboxId:1cfcc3af05ebb8e14183a8bcbdba732a57b181af412a613bd2bbd2579cebbef4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722901168327637273,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f1a0b5192f07729588fefe71f91e855,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d3772211d8011c9a6554ddc5569f3920bbe3050b56a031062e0557cf43be0e2,PodSandboxId:206121b53ee872114e8fe65e58499c97a566a2df9439ac8dd4a51eaa92a99fa1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722901168281340148,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20dbf724a8b080d422a73b396072a4c7,},Annotations:map
[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d0301dfe-949f-43d2-a8d1-3ce913245bad name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:50:38 multinode-342677 crio[2886]: time="2024-08-05 23:50:38.869991181Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=023f9665-0859-4b05-8c20-7d3d33c8a486 name=/runtime.v1.RuntimeService/Version
	Aug 05 23:50:38 multinode-342677 crio[2886]: time="2024-08-05 23:50:38.870084993Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=023f9665-0859-4b05-8c20-7d3d33c8a486 name=/runtime.v1.RuntimeService/Version
	Aug 05 23:50:38 multinode-342677 crio[2886]: time="2024-08-05 23:50:38.871422495Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=20de80d2-b255-4970-90b0-ff328fbd6981 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:50:38 multinode-342677 crio[2886]: time="2024-08-05 23:50:38.872040807Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722901838872014374,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143053,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=20de80d2-b255-4970-90b0-ff328fbd6981 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:50:38 multinode-342677 crio[2886]: time="2024-08-05 23:50:38.872597223Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=00f9d8ea-4124-4a19-9852-6ed11dcb38fd name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:50:38 multinode-342677 crio[2886]: time="2024-08-05 23:50:38.872713027Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=00f9d8ea-4124-4a19-9852-6ed11dcb38fd name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:50:38 multinode-342677 crio[2886]: time="2024-08-05 23:50:38.873066808Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a0e5a6b7ec4cef49b6d8fde0b12859c53f333ffac6eb59a728ac65e9274ba3bf,PodSandboxId:9c058b71634ff24060e0e8b0c1b24a92c2863e78046e35171cf27bb43980ef81,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722901625795332746,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-78mt7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2761ea7e-d8a2-40d3-bd8d-a2e484b0bec3,},Annotations:map[string]string{io.kubernetes.container.hash: 5d980674,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62f62a176c70a254e48450706f9b7524e717202076210725309a1e6c28138bc,PodSandboxId:ee8ad4fa44950fa02aa006da747e47186dc3d9aa497736cc81b26273582092da,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1722901592221740056,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6c596,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8a66d1c-c60f-4a75-8104-151faf7922b9,},Annotations:map[string]string{io.kubernetes.container.hash: 4ff057c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6797a9b46983671da1ba766fb34f5be198a6f9d393ac2f171339c0def77c28e1,PodSandboxId:50b50402089f5ef893f2e1020443ac3740204eafd51c0bb3b0c1a95387d4a4f4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722901592169187259,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v42dl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f82457c8-44fc-476d-828b-ac33899c132b,},Annotations:map[string]string{io.kubernetes.container.hash: 66c772fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b79ca8015a145db755007359177d373f8fb63ee8d261e67f64838e7af497133,PodSandboxId:f794a28dee848aa9a6e5f529ff4b0ac13bcdc20f7efc577d10f83af4d0a7f96e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722901591997430400,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2dnzb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddda1087-36af-4e82-88d3-54e6348c5e22,},Annotations:map[string]
string{io.kubernetes.container.hash: 10979fcd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5a92de8354098f5dc94e8aa94bb5d5aad51d11e8a6025988fd20a80568eee49,PodSandboxId:b37ed3d64fd18e7e93f99b002137d4d76775a81143bf44397c5edabd2b86fcc0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722901591963793102,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71064ea8-4354-4f74-9efc-52487675def4,},Annotations:map[string]string{io.ku
bernetes.container.hash: f7b2680,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:943c42b387fc724e757ea4b76361e6a758b577b524c7c10390b65369cea51422,PodSandboxId:054f8fceeb7cdcfd41e776e32bd55b59140891b42d00369b60b9aaad1a58465c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722901588232266700,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f1a0b5192f07729588fefe71f91e855,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:268fef5c96aefcf57fc17aa09c4ebf2c737c37b5bdc83fe67a396bfa1b804384,PodSandboxId:365c7bc7a2b1ca350880d983794b4c6feb8597b97f66dfc3b8a048ac9720136c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722901588210215125,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe9d35cc8086dee57d5df88c8a99e7d8,},Annotations:map[string]string{io.kubernetes.container.hash: f19e9654,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45dfe807a437e2efa653406b8d23df5726dff92deecdb42360742ab37c64c201,PodSandboxId:ac8cb2ec338da417c69fee22ef95616b800021fb34ffc5c70712e3fcbf35a0d0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722901588202172871,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20dbf724a8b080d422a73b396072a4c7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdaf3015949ce621acf67c07735918381578c9af19ebd3e5221f87a4cd2af079,PodSandboxId:3b873b27fc9d25b6c540d99c09926b8080f05d4f2343f9b81ce0b3b945380ea9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722901588169836346,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efe0bdf356940b826a4ba3b020e6529c,},Annotations:map[string]string{io.kubernetes.container.hash: d82bc00b,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ca4e362daa3cf637429cc280868001a87ead2a1c6b86c42ca8880864eb2b33b,PodSandboxId:cbe52e52d544551a4eaeb48f07902ba0252b2e562e0b7426cc66a20762a4a053,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722901260953256689,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-78mt7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2761ea7e-d8a2-40d3-bd8d-a2e484b0bec3,},Annotations:map[string]string{io.kubernetes.container.hash: 5d980674,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00b5601d99857e510793bf69888cd3fe706ae1919ea2fa58cd1e49e2cc8fe8f3,PodSandboxId:3d9e1ffaf8822a12d115003c1883d1817957a0bcb4e4d516649e4b91ab06ba3c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722901206217934742,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v42dl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f82457c8-44fc-476d-828b-ac33899c132b,},Annotations:map[string]string{io.kubernetes.container.hash: 66c772fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8b4c139ba9f34045731ac7ff528df9d7aebaf6e5e682eeac1c47b5710313379,PodSandboxId:a8259144d8379242d353a86d5adec712cc26b0a08e440fabe5668e9603e2a7e1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722901204667548032,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 71064ea8-4354-4f74-9efc-52487675def4,},Annotations:map[string]string{io.kubernetes.container.hash: f7b2680,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:150ce9e294a897bb1eee154f726f0956df1618247219dcd049722c011dbe331e,PodSandboxId:0e5f1c11948a79d3e2c7d6179de4c73e196695f95a60f9892b69f6ec45c16d38,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1722901192992033098,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6c596,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: a8a66d1c-c60f-4a75-8104-151faf7922b9,},Annotations:map[string]string{io.kubernetes.container.hash: 4ff057c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b1d0ef18e29d3787609be51f754e7f2324ee16d19d999762bac401d079a7fd2,PodSandboxId:353f772917eac257829637e206a259eb1f44afeea71efacfab5ce5d5af8892b0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722901189044346374,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2dnzb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: ddda1087-36af-4e82-88d3-54e6348c5e22,},Annotations:map[string]string{io.kubernetes.container.hash: 10979fcd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e13b94e51e4836e65d865d70745d086a906658385b8b067fe0d8e69095705e,PodSandboxId:18c7ba91ecfa22ab34982fbb76f08587f43a0f966129ab03ec78113f3a756e1c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722901168318903940,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe9d35cc8086dee57d5df88c8a99e7d8
,},Annotations:map[string]string{io.kubernetes.container.hash: f19e9654,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f227e8cf03b66f737d02e2c7b817576ad72901aa61a0e63d337fb36ec9c32943,PodSandboxId:08f166f93ecd6c382e72917c7c4f41f7606ffe9bf055ba92daa336772820b451,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722901168373329147,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efe0bdf356940b826a4ba3b020e6529c,},Annotations:
map[string]string{io.kubernetes.container.hash: d82bc00b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cc7242052f30bef2f21e600e245b76900de63c25a681c55c467489b4bb4cad9,PodSandboxId:1cfcc3af05ebb8e14183a8bcbdba732a57b181af412a613bd2bbd2579cebbef4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722901168327637273,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f1a0b5192f07729588fefe71f91e855,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d3772211d8011c9a6554ddc5569f3920bbe3050b56a031062e0557cf43be0e2,PodSandboxId:206121b53ee872114e8fe65e58499c97a566a2df9439ac8dd4a51eaa92a99fa1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722901168281340148,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20dbf724a8b080d422a73b396072a4c7,},Annotations:map
[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=00f9d8ea-4124-4a19-9852-6ed11dcb38fd name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:50:38 multinode-342677 crio[2886]: time="2024-08-05 23:50:38.914070518Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bdbfd2ed-3165-4fb3-a6f1-a9496677b4a2 name=/runtime.v1.RuntimeService/Version
	Aug 05 23:50:38 multinode-342677 crio[2886]: time="2024-08-05 23:50:38.914144110Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bdbfd2ed-3165-4fb3-a6f1-a9496677b4a2 name=/runtime.v1.RuntimeService/Version
	Aug 05 23:50:38 multinode-342677 crio[2886]: time="2024-08-05 23:50:38.915955300Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9eddec2f-ffc1-45d5-b63b-744ee67f62dc name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:50:38 multinode-342677 crio[2886]: time="2024-08-05 23:50:38.916586702Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722901838916379488,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143053,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9eddec2f-ffc1-45d5-b63b-744ee67f62dc name=/runtime.v1.ImageService/ImageFsInfo
	Aug 05 23:50:38 multinode-342677 crio[2886]: time="2024-08-05 23:50:38.917310634Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=67adab1b-ba76-47a9-a0e6-d5a2b64ed4e5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:50:38 multinode-342677 crio[2886]: time="2024-08-05 23:50:38.917381709Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=67adab1b-ba76-47a9-a0e6-d5a2b64ed4e5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 05 23:50:38 multinode-342677 crio[2886]: time="2024-08-05 23:50:38.917816422Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a0e5a6b7ec4cef49b6d8fde0b12859c53f333ffac6eb59a728ac65e9274ba3bf,PodSandboxId:9c058b71634ff24060e0e8b0c1b24a92c2863e78046e35171cf27bb43980ef81,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722901625795332746,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-78mt7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2761ea7e-d8a2-40d3-bd8d-a2e484b0bec3,},Annotations:map[string]string{io.kubernetes.container.hash: 5d980674,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b62f62a176c70a254e48450706f9b7524e717202076210725309a1e6c28138bc,PodSandboxId:ee8ad4fa44950fa02aa006da747e47186dc3d9aa497736cc81b26273582092da,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_RUNNING,CreatedAt:1722901592221740056,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6c596,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8a66d1c-c60f-4a75-8104-151faf7922b9,},Annotations:map[string]string{io.kubernetes.container.hash: 4ff057c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6797a9b46983671da1ba766fb34f5be198a6f9d393ac2f171339c0def77c28e1,PodSandboxId:50b50402089f5ef893f2e1020443ac3740204eafd51c0bb3b0c1a95387d4a4f4,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722901592169187259,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v42dl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f82457c8-44fc-476d-828b-ac33899c132b,},Annotations:map[string]string{io.kubernetes.container.hash: 66c772fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b79ca8015a145db755007359177d373f8fb63ee8d261e67f64838e7af497133,PodSandboxId:f794a28dee848aa9a6e5f529ff4b0ac13bcdc20f7efc577d10f83af4d0a7f96e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722901591997430400,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2dnzb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddda1087-36af-4e82-88d3-54e6348c5e22,},Annotations:map[string]
string{io.kubernetes.container.hash: 10979fcd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5a92de8354098f5dc94e8aa94bb5d5aad51d11e8a6025988fd20a80568eee49,PodSandboxId:b37ed3d64fd18e7e93f99b002137d4d76775a81143bf44397c5edabd2b86fcc0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722901591963793102,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 71064ea8-4354-4f74-9efc-52487675def4,},Annotations:map[string]string{io.ku
bernetes.container.hash: f7b2680,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:943c42b387fc724e757ea4b76361e6a758b577b524c7c10390b65369cea51422,PodSandboxId:054f8fceeb7cdcfd41e776e32bd55b59140891b42d00369b60b9aaad1a58465c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722901588232266700,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f1a0b5192f07729588fefe71f91e855,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:268fef5c96aefcf57fc17aa09c4ebf2c737c37b5bdc83fe67a396bfa1b804384,PodSandboxId:365c7bc7a2b1ca350880d983794b4c6feb8597b97f66dfc3b8a048ac9720136c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722901588210215125,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe9d35cc8086dee57d5df88c8a99e7d8,},Annotations:map[string]string{io.kubernetes.container.hash: f19e9654,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45dfe807a437e2efa653406b8d23df5726dff92deecdb42360742ab37c64c201,PodSandboxId:ac8cb2ec338da417c69fee22ef95616b800021fb34ffc5c70712e3fcbf35a0d0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722901588202172871,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20dbf724a8b080d422a73b396072a4c7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdaf3015949ce621acf67c07735918381578c9af19ebd3e5221f87a4cd2af079,PodSandboxId:3b873b27fc9d25b6c540d99c09926b8080f05d4f2343f9b81ce0b3b945380ea9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722901588169836346,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efe0bdf356940b826a4ba3b020e6529c,},Annotations:map[string]string{io.kubernetes.container.hash: d82bc00b,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ca4e362daa3cf637429cc280868001a87ead2a1c6b86c42ca8880864eb2b33b,PodSandboxId:cbe52e52d544551a4eaeb48f07902ba0252b2e562e0b7426cc66a20762a4a053,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722901260953256689,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-78mt7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2761ea7e-d8a2-40d3-bd8d-a2e484b0bec3,},Annotations:map[string]string{io.kubernetes.container.hash: 5d980674,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00b5601d99857e510793bf69888cd3fe706ae1919ea2fa58cd1e49e2cc8fe8f3,PodSandboxId:3d9e1ffaf8822a12d115003c1883d1817957a0bcb4e4d516649e4b91ab06ba3c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722901206217934742,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-v42dl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f82457c8-44fc-476d-828b-ac33899c132b,},Annotations:map[string]string{io.kubernetes.container.hash: 66c772fe,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containe
rPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8b4c139ba9f34045731ac7ff528df9d7aebaf6e5e682eeac1c47b5710313379,PodSandboxId:a8259144d8379242d353a86d5adec712cc26b0a08e440fabe5668e9603e2a7e1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722901204667548032,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 71064ea8-4354-4f74-9efc-52487675def4,},Annotations:map[string]string{io.kubernetes.container.hash: f7b2680,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:150ce9e294a897bb1eee154f726f0956df1618247219dcd049722c011dbe331e,PodSandboxId:0e5f1c11948a79d3e2c7d6179de4c73e196695f95a60f9892b69f6ec45c16d38,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557,State:CONTAINER_EXITED,CreatedAt:1722901192992033098,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6c596,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: a8a66d1c-c60f-4a75-8104-151faf7922b9,},Annotations:map[string]string{io.kubernetes.container.hash: 4ff057c6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b1d0ef18e29d3787609be51f754e7f2324ee16d19d999762bac401d079a7fd2,PodSandboxId:353f772917eac257829637e206a259eb1f44afeea71efacfab5ce5d5af8892b0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722901189044346374,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2dnzb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: ddda1087-36af-4e82-88d3-54e6348c5e22,},Annotations:map[string]string{io.kubernetes.container.hash: 10979fcd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30e13b94e51e4836e65d865d70745d086a906658385b8b067fe0d8e69095705e,PodSandboxId:18c7ba91ecfa22ab34982fbb76f08587f43a0f966129ab03ec78113f3a756e1c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722901168318903940,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe9d35cc8086dee57d5df88c8a99e7d8
,},Annotations:map[string]string{io.kubernetes.container.hash: f19e9654,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f227e8cf03b66f737d02e2c7b817576ad72901aa61a0e63d337fb36ec9c32943,PodSandboxId:08f166f93ecd6c382e72917c7c4f41f7606ffe9bf055ba92daa336772820b451,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722901168373329147,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: efe0bdf356940b826a4ba3b020e6529c,},Annotations:
map[string]string{io.kubernetes.container.hash: d82bc00b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cc7242052f30bef2f21e600e245b76900de63c25a681c55c467489b4bb4cad9,PodSandboxId:1cfcc3af05ebb8e14183a8bcbdba732a57b181af412a613bd2bbd2579cebbef4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722901168327637273,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f1a0b5192f07729588fefe71f91e855,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d3772211d8011c9a6554ddc5569f3920bbe3050b56a031062e0557cf43be0e2,PodSandboxId:206121b53ee872114e8fe65e58499c97a566a2df9439ac8dd4a51eaa92a99fa1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722901168281340148,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-342677,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 20dbf724a8b080d422a73b396072a4c7,},Annotations:map
[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=67adab1b-ba76-47a9-a0e6-d5a2b64ed4e5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a0e5a6b7ec4ce       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   9c058b71634ff       busybox-fc5497c4f-78mt7
	b62f62a176c70       917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557                                      4 minutes ago       Running             kindnet-cni               1                   ee8ad4fa44950       kindnet-6c596
	6797a9b469836       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   50b50402089f5       coredns-7db6d8ff4d-v42dl
	3b79ca8015a14       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      4 minutes ago       Running             kube-proxy                1                   f794a28dee848       kube-proxy-2dnzb
	a5a92de835409       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   b37ed3d64fd18       storage-provisioner
	943c42b387fc7       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      4 minutes ago       Running             kube-scheduler            1                   054f8fceeb7cd       kube-scheduler-multinode-342677
	268fef5c96aef       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   365c7bc7a2b1c       etcd-multinode-342677
	45dfe807a437e       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   1                   ac8cb2ec338da       kube-controller-manager-multinode-342677
	bdaf3015949ce       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            1                   3b873b27fc9d2       kube-apiserver-multinode-342677
	4ca4e362daa3c       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   cbe52e52d5445       busybox-fc5497c4f-78mt7
	00b5601d99857       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   3d9e1ffaf8822       coredns-7db6d8ff4d-v42dl
	c8b4c139ba9f3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   a8259144d8379       storage-provisioner
	150ce9e294a89       docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3    10 minutes ago      Exited              kindnet-cni               0                   0e5f1c11948a7       kindnet-6c596
	3b1d0ef18e29d       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      10 minutes ago      Exited              kube-proxy                0                   353f772917eac       kube-proxy-2dnzb
	f227e8cf03b66       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      11 minutes ago      Exited              kube-apiserver            0                   08f166f93ecd6       kube-apiserver-multinode-342677
	5cc7242052f30       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      11 minutes ago      Exited              kube-scheduler            0                   1cfcc3af05ebb       kube-scheduler-multinode-342677
	30e13b94e51e4       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      11 minutes ago      Exited              etcd                      0                   18c7ba91ecfa2       etcd-multinode-342677
	9d3772211d801       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      11 minutes ago      Exited              kube-controller-manager   0                   206121b53ee87       kube-controller-manager-multinode-342677
	
	
	==> coredns [00b5601d99857e510793bf69888cd3fe706ae1919ea2fa58cd1e49e2cc8fe8f3] <==
	[INFO] 10.244.1.2:34450 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001788569s
	[INFO] 10.244.1.2:41932 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154255s
	[INFO] 10.244.1.2:57045 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012768s
	[INFO] 10.244.1.2:41817 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001243242s
	[INFO] 10.244.1.2:41711 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000085796s
	[INFO] 10.244.1.2:41750 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118369s
	[INFO] 10.244.1.2:59875 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000096696s
	[INFO] 10.244.0.3:44805 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015438s
	[INFO] 10.244.0.3:34075 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000079664s
	[INFO] 10.244.0.3:44640 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076905s
	[INFO] 10.244.0.3:49003 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080563s
	[INFO] 10.244.1.2:42631 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168788s
	[INFO] 10.244.1.2:53592 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000121393s
	[INFO] 10.244.1.2:37536 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076273s
	[INFO] 10.244.1.2:37579 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102808s
	[INFO] 10.244.0.3:51697 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108269s
	[INFO] 10.244.0.3:51895 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000113953s
	[INFO] 10.244.0.3:48174 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000083572s
	[INFO] 10.244.0.3:36200 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000083405s
	[INFO] 10.244.1.2:56466 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000160083s
	[INFO] 10.244.1.2:59177 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000182601s
	[INFO] 10.244.1.2:32771 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000109942s
	[INFO] 10.244.1.2:56161 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000084893s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [6797a9b46983671da1ba766fb34f5be198a6f9d393ac2f171339c0def77c28e1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57926 - 14732 "HINFO IN 843478541876508552.7132016438858336143. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.015236044s
	
	
	==> describe nodes <==
	Name:               multinode-342677
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-342677
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=multinode-342677
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_05T23_39_35_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 23:39:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-342677
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 23:50:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 23:46:31 +0000   Mon, 05 Aug 2024 23:39:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 23:46:31 +0000   Mon, 05 Aug 2024 23:39:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 23:46:31 +0000   Mon, 05 Aug 2024 23:39:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 23:46:31 +0000   Mon, 05 Aug 2024 23:40:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.10
	  Hostname:    multinode-342677
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 57f45a9d11da491e8779a6849117c573
	  System UUID:                57f45a9d-11da-491e-8779-a6849117c573
	  Boot ID:                    a21c39f9-4ec3-4075-86c4-15b50cfc820e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-78mt7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m42s
	  kube-system                 coredns-7db6d8ff4d-v42dl                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-342677                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-6c596                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-342677             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-multinode-342677    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-2dnzb                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-342677             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 4m6s                   kube-proxy       
	  Normal  NodeHasSufficientPID     11m                    kubelet          Node multinode-342677 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m                    kubelet          Node multinode-342677 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                    kubelet          Node multinode-342677 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 11m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                    node-controller  Node multinode-342677 event: Registered Node multinode-342677 in Controller
	  Normal  NodeReady                10m                    kubelet          Node multinode-342677 status is now: NodeReady
	  Normal  Starting                 4m12s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m12s (x8 over 4m12s)  kubelet          Node multinode-342677 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m12s (x8 over 4m12s)  kubelet          Node multinode-342677 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m12s (x7 over 4m12s)  kubelet          Node multinode-342677 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m56s                  node-controller  Node multinode-342677 event: Registered Node multinode-342677 in Controller
	
	
	Name:               multinode-342677-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-342677-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=multinode-342677
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_05T23_47_12_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 23:47:12 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-342677-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 23:48:14 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 05 Aug 2024 23:47:43 +0000   Mon, 05 Aug 2024 23:48:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 05 Aug 2024 23:47:43 +0000   Mon, 05 Aug 2024 23:48:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 05 Aug 2024 23:47:43 +0000   Mon, 05 Aug 2024 23:48:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 05 Aug 2024 23:47:43 +0000   Mon, 05 Aug 2024 23:48:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.89
	  Hostname:    multinode-342677-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 800cc43d422f47829d391512715fe306
	  System UUID:                800cc43d-422f-4782-9d39-1512715fe306
	  Boot ID:                    ab3dfc9b-1b31-4ead-946b-bdf9e2156dba
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-98dgl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 kindnet-kw6xt              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-ktlwn           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m22s                  kube-proxy       
	  Normal  Starting                 9m59s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)      kubelet          Node multinode-342677-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)      kubelet          Node multinode-342677-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)      kubelet          Node multinode-342677-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m44s                  kubelet          Node multinode-342677-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m27s (x2 over 3m27s)  kubelet          Node multinode-342677-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m27s (x2 over 3m27s)  kubelet          Node multinode-342677-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m27s (x2 over 3m27s)  kubelet          Node multinode-342677-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m7s                   kubelet          Node multinode-342677-m02 status is now: NodeReady
	  Normal  NodeNotReady             101s                   node-controller  Node multinode-342677-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.175173] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.109595] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.267886] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +4.326511] systemd-fstab-generator[762]: Ignoring "noauto" option for root device
	[  +0.063755] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.754069] systemd-fstab-generator[951]: Ignoring "noauto" option for root device
	[  +0.561200] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.978599] systemd-fstab-generator[1285]: Ignoring "noauto" option for root device
	[  +0.086658] kauditd_printk_skb: 41 callbacks suppressed
	[ +14.181258] systemd-fstab-generator[1475]: Ignoring "noauto" option for root device
	[  +0.134859] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.010819] kauditd_printk_skb: 51 callbacks suppressed
	[Aug 5 23:40] kauditd_printk_skb: 14 callbacks suppressed
	[Aug 5 23:46] systemd-fstab-generator[2805]: Ignoring "noauto" option for root device
	[  +0.150293] systemd-fstab-generator[2817]: Ignoring "noauto" option for root device
	[  +0.195896] systemd-fstab-generator[2831]: Ignoring "noauto" option for root device
	[  +0.159146] systemd-fstab-generator[2843]: Ignoring "noauto" option for root device
	[  +0.289437] systemd-fstab-generator[2871]: Ignoring "noauto" option for root device
	[  +8.138736] systemd-fstab-generator[2971]: Ignoring "noauto" option for root device
	[  +0.082286] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.221062] systemd-fstab-generator[3092]: Ignoring "noauto" option for root device
	[  +4.619997] kauditd_printk_skb: 74 callbacks suppressed
	[ +12.037641] kauditd_printk_skb: 32 callbacks suppressed
	[  +4.116201] systemd-fstab-generator[3924]: Ignoring "noauto" option for root device
	[Aug 5 23:47] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [268fef5c96aefcf57fc17aa09c4ebf2c737c37b5bdc83fe67a396bfa1b804384] <==
	{"level":"info","ts":"2024-08-05T23:46:28.662085Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T23:46:28.662112Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T23:46:28.662406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e switched to configuration voters=(17911497232019635470)"}
	{"level":"info","ts":"2024-08-05T23:46:28.664834Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a710b3f69152e32","local-member-id":"f8926bd555ec3d0e","added-peer-id":"f8926bd555ec3d0e","added-peer-peer-urls":["https://192.168.39.10:2380"]}
	{"level":"info","ts":"2024-08-05T23:46:28.668002Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a710b3f69152e32","local-member-id":"f8926bd555ec3d0e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:46:28.668614Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:46:28.674132Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-05T23:46:28.674339Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f8926bd555ec3d0e","initial-advertise-peer-urls":["https://192.168.39.10:2380"],"listen-peer-urls":["https://192.168.39.10:2380"],"advertise-client-urls":["https://192.168.39.10:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.10:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-05T23:46:28.674389Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-05T23:46:28.674548Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.10:2380"}
	{"level":"info","ts":"2024-08-05T23:46:28.674574Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.10:2380"}
	{"level":"info","ts":"2024-08-05T23:46:30.017249Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-05T23:46:30.017296Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-05T23:46:30.017334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e received MsgPreVoteResp from f8926bd555ec3d0e at term 2"}
	{"level":"info","ts":"2024-08-05T23:46:30.017348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e became candidate at term 3"}
	{"level":"info","ts":"2024-08-05T23:46:30.017357Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e received MsgVoteResp from f8926bd555ec3d0e at term 3"}
	{"level":"info","ts":"2024-08-05T23:46:30.017366Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f8926bd555ec3d0e became leader at term 3"}
	{"level":"info","ts":"2024-08-05T23:46:30.017375Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f8926bd555ec3d0e elected leader f8926bd555ec3d0e at term 3"}
	{"level":"info","ts":"2024-08-05T23:46:30.023417Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T23:46:30.024808Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f8926bd555ec3d0e","local-member-attributes":"{Name:multinode-342677 ClientURLs:[https://192.168.39.10:2379]}","request-path":"/0/members/f8926bd555ec3d0e/attributes","cluster-id":"3a710b3f69152e32","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-05T23:46:30.025518Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.10:2379"}
	{"level":"info","ts":"2024-08-05T23:46:30.025741Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T23:46:30.026087Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-05T23:46:30.026101Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-05T23:46:30.027565Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [30e13b94e51e4836e65d865d70745d086a906658385b8b067fe0d8e69095705e] <==
	{"level":"info","ts":"2024-08-05T23:39:29.692948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f8926bd555ec3d0e elected leader f8926bd555ec3d0e at term 2"}
	{"level":"info","ts":"2024-08-05T23:39:29.694589Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:39:29.695548Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f8926bd555ec3d0e","local-member-attributes":"{Name:multinode-342677 ClientURLs:[https://192.168.39.10:2379]}","request-path":"/0/members/f8926bd555ec3d0e/attributes","cluster-id":"3a710b3f69152e32","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-05T23:39:29.696159Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a710b3f69152e32","local-member-id":"f8926bd555ec3d0e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:39:29.696261Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:39:29.696301Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:39:29.696329Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T23:39:29.696786Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T23:39:29.697759Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-05T23:39:29.697795Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-05T23:39:29.698571Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.10:2379"}
	{"level":"info","ts":"2024-08-05T23:39:29.7001Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-05T23:40:35.807248Z","caller":"traceutil/trace.go:171","msg":"trace[265089564] transaction","detail":"{read_only:false; response_revision:455; number_of_response:1; }","duration":"154.024841ms","start":"2024-08-05T23:40:35.653191Z","end":"2024-08-05T23:40:35.807216Z","steps":["trace[265089564] 'process raft request'  (duration: 153.958786ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T23:41:30.019621Z","caller":"traceutil/trace.go:171","msg":"trace[494470632] transaction","detail":"{read_only:false; response_revision:590; number_of_response:1; }","duration":"168.29581ms","start":"2024-08-05T23:41:29.851256Z","end":"2024-08-05T23:41:30.019552Z","steps":["trace[494470632] 'process raft request'  (duration: 167.217525ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T23:41:30.02015Z","caller":"traceutil/trace.go:171","msg":"trace[1642343311] transaction","detail":"{read_only:false; response_revision:591; number_of_response:1; }","duration":"146.040922ms","start":"2024-08-05T23:41:29.874099Z","end":"2024-08-05T23:41:30.02014Z","steps":["trace[1642343311] 'process raft request'  (duration: 145.936898ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-05T23:44:44.800342Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-05T23:44:44.800456Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-342677","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.10:2380"],"advertise-client-urls":["https://192.168.39.10:2379"]}
	{"level":"warn","ts":"2024-08-05T23:44:44.80061Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-05T23:44:44.800752Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-05T23:44:44.88377Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.10:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-05T23:44:44.883807Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.10:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-05T23:44:44.88386Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f8926bd555ec3d0e","current-leader-member-id":"f8926bd555ec3d0e"}
	{"level":"info","ts":"2024-08-05T23:44:44.886738Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.10:2380"}
	{"level":"info","ts":"2024-08-05T23:44:44.886878Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.10:2380"}
	{"level":"info","ts":"2024-08-05T23:44:44.886887Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-342677","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.10:2380"],"advertise-client-urls":["https://192.168.39.10:2379"]}
	
	
	==> kernel <==
	 23:50:39 up 11 min,  0 users,  load average: 0.31, 0.23, 0.14
	Linux multinode-342677 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [150ce9e294a897bb1eee154f726f0956df1618247219dcd049722c011dbe331e] <==
	I0805 23:44:04.061505       1 main.go:322] Node multinode-342677-m02 has CIDR [10.244.1.0/24] 
	I0805 23:44:14.061009       1 main.go:295] Handling node with IPs: map[192.168.39.10:{}]
	I0805 23:44:14.061050       1 main.go:299] handling current node
	I0805 23:44:14.061069       1 main.go:295] Handling node with IPs: map[192.168.39.89:{}]
	I0805 23:44:14.061077       1 main.go:322] Node multinode-342677-m02 has CIDR [10.244.1.0/24] 
	I0805 23:44:14.061213       1 main.go:295] Handling node with IPs: map[192.168.39.75:{}]
	I0805 23:44:14.061219       1 main.go:322] Node multinode-342677-m03 has CIDR [10.244.3.0/24] 
	I0805 23:44:24.056383       1 main.go:295] Handling node with IPs: map[192.168.39.10:{}]
	I0805 23:44:24.056413       1 main.go:299] handling current node
	I0805 23:44:24.056426       1 main.go:295] Handling node with IPs: map[192.168.39.89:{}]
	I0805 23:44:24.056431       1 main.go:322] Node multinode-342677-m02 has CIDR [10.244.1.0/24] 
	I0805 23:44:24.056628       1 main.go:295] Handling node with IPs: map[192.168.39.75:{}]
	I0805 23:44:24.056634       1 main.go:322] Node multinode-342677-m03 has CIDR [10.244.3.0/24] 
	I0805 23:44:34.059766       1 main.go:295] Handling node with IPs: map[192.168.39.10:{}]
	I0805 23:44:34.059819       1 main.go:299] handling current node
	I0805 23:44:34.059846       1 main.go:295] Handling node with IPs: map[192.168.39.89:{}]
	I0805 23:44:34.059851       1 main.go:322] Node multinode-342677-m02 has CIDR [10.244.1.0/24] 
	I0805 23:44:34.060045       1 main.go:295] Handling node with IPs: map[192.168.39.75:{}]
	I0805 23:44:34.060055       1 main.go:322] Node multinode-342677-m03 has CIDR [10.244.3.0/24] 
	I0805 23:44:44.063998       1 main.go:295] Handling node with IPs: map[192.168.39.10:{}]
	I0805 23:44:44.064070       1 main.go:299] handling current node
	I0805 23:44:44.064095       1 main.go:295] Handling node with IPs: map[192.168.39.89:{}]
	I0805 23:44:44.064105       1 main.go:322] Node multinode-342677-m02 has CIDR [10.244.1.0/24] 
	I0805 23:44:44.064308       1 main.go:295] Handling node with IPs: map[192.168.39.75:{}]
	I0805 23:44:44.064347       1 main.go:322] Node multinode-342677-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [b62f62a176c70a254e48450706f9b7524e717202076210725309a1e6c28138bc] <==
	I0805 23:49:33.259189       1 main.go:299] handling current node
	I0805 23:49:43.258561       1 main.go:295] Handling node with IPs: map[192.168.39.10:{}]
	I0805 23:49:43.258663       1 main.go:299] handling current node
	I0805 23:49:43.258788       1 main.go:295] Handling node with IPs: map[192.168.39.89:{}]
	I0805 23:49:43.258810       1 main.go:322] Node multinode-342677-m02 has CIDR [10.244.1.0/24] 
	I0805 23:49:53.259603       1 main.go:295] Handling node with IPs: map[192.168.39.10:{}]
	I0805 23:49:53.259763       1 main.go:299] handling current node
	I0805 23:49:53.259807       1 main.go:295] Handling node with IPs: map[192.168.39.89:{}]
	I0805 23:49:53.259831       1 main.go:322] Node multinode-342677-m02 has CIDR [10.244.1.0/24] 
	I0805 23:50:03.258730       1 main.go:295] Handling node with IPs: map[192.168.39.10:{}]
	I0805 23:50:03.258826       1 main.go:299] handling current node
	I0805 23:50:03.258852       1 main.go:295] Handling node with IPs: map[192.168.39.89:{}]
	I0805 23:50:03.258858       1 main.go:322] Node multinode-342677-m02 has CIDR [10.244.1.0/24] 
	I0805 23:50:13.259260       1 main.go:295] Handling node with IPs: map[192.168.39.10:{}]
	I0805 23:50:13.259356       1 main.go:299] handling current node
	I0805 23:50:13.259384       1 main.go:295] Handling node with IPs: map[192.168.39.89:{}]
	I0805 23:50:13.259402       1 main.go:322] Node multinode-342677-m02 has CIDR [10.244.1.0/24] 
	I0805 23:50:23.258916       1 main.go:295] Handling node with IPs: map[192.168.39.10:{}]
	I0805 23:50:23.258959       1 main.go:299] handling current node
	I0805 23:50:23.258973       1 main.go:295] Handling node with IPs: map[192.168.39.89:{}]
	I0805 23:50:23.258978       1 main.go:322] Node multinode-342677-m02 has CIDR [10.244.1.0/24] 
	I0805 23:50:33.258899       1 main.go:295] Handling node with IPs: map[192.168.39.10:{}]
	I0805 23:50:33.258970       1 main.go:299] handling current node
	I0805 23:50:33.258990       1 main.go:295] Handling node with IPs: map[192.168.39.89:{}]
	I0805 23:50:33.258997       1 main.go:322] Node multinode-342677-m02 has CIDR [10.244.1.0/24] 
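	
	Both kindnet containers above walk the node list every ten seconds and keep a route for each remote node's pod CIDR; after the restart only multinode-342677-m02 is listed besides the local node. A hedged cross-check (not part of the test suite) is to read those CIDRs straight off the node objects and compare them with the routes logged above; the kubectl context name matches the profile in this report.
	
	// Hypothetical check: print each node's podCIDR (e.g. 10.244.1.0/24 for
	// multinode-342677-m02) for comparison with the kindnet log lines above.
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		jsonpath := `{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}`
		out, err := exec.Command("kubectl", "--context", "multinode-342677",
			"get", "nodes", "-o", "jsonpath="+jsonpath).CombinedOutput()
		fmt.Printf("%s(err=%v)\n", out, err)
	}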
	
	
	==> kube-apiserver [bdaf3015949ce621acf67c07735918381578c9af19ebd3e5221f87a4cd2af079] <==
	I0805 23:46:31.370626       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0805 23:46:31.377518       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0805 23:46:31.377911       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0805 23:46:31.382906       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0805 23:46:31.390519       1 aggregator.go:165] initial CRD sync complete...
	I0805 23:46:31.390616       1 autoregister_controller.go:141] Starting autoregister controller
	I0805 23:46:31.390655       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0805 23:46:31.390729       1 cache.go:39] Caches are synced for autoregister controller
	I0805 23:46:31.393736       1 shared_informer.go:320] Caches are synced for configmaps
	I0805 23:46:31.393867       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0805 23:46:31.393892       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0805 23:46:31.404845       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0805 23:46:31.428128       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0805 23:46:31.442972       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0805 23:46:31.454267       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0805 23:46:31.454382       1 policy_source.go:224] refreshing policies
	I0805 23:46:31.469376       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0805 23:46:32.279308       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0805 23:46:33.429581       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0805 23:46:33.553178       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0805 23:46:33.568025       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0805 23:46:33.646975       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0805 23:46:33.653566       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0805 23:46:43.869066       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0805 23:46:44.022724       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [f227e8cf03b66f737d02e2c7b817576ad72901aa61a0e63d337fb36ec9c32943] <==
	E0805 23:44:44.825501       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0805 23:44:44.825570       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0805 23:44:44.825655       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0805 23:44:44.825789       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0805 23:44:44.825798       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0805 23:44:44.825841       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:44:44.825877       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0805 23:44:44.825877       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0805 23:44:44.825909       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:44:44.825944       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:44:44.826339       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:44:44.826425       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:44:44.826540       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:44:44.826576       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:44:44.826610       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:44:44.826641       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:44:44.826739       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:44:44.826776       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:44:44.826808       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:44:44.826866       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:44:44.826899       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:44:44.826928       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:44:44.826958       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:44:44.826988       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:44:44.827027       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [45dfe807a437e2efa653406b8d23df5726dff92deecdb42360742ab37c64c201] <==
	I0805 23:47:12.525088       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-342677-m02" podCIDRs=["10.244.1.0/24"]
	I0805 23:47:14.287153       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.315µs"
	I0805 23:47:14.429188       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.535µs"
	I0805 23:47:14.442218       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.046µs"
	I0805 23:47:14.452449       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.912µs"
	I0805 23:47:14.498292       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.556µs"
	I0805 23:47:14.505810       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.066µs"
	I0805 23:47:14.507821       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.872µs"
	I0805 23:47:32.310853       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-342677-m02"
	I0805 23:47:32.332018       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.231µs"
	I0805 23:47:32.347365       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.798µs"
	I0805 23:47:35.696457       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.696021ms"
	I0805 23:47:35.696612       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.613µs"
	I0805 23:47:51.403049       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-342677-m02"
	I0805 23:47:52.584099       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-342677-m02"
	I0805 23:47:52.584326       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-342677-m03\" does not exist"
	I0805 23:47:52.612588       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-342677-m03" podCIDRs=["10.244.2.0/24"]
	I0805 23:48:12.229963       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-342677-m02"
	I0805 23:48:17.684091       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-342677-m02"
	I0805 23:48:59.012439       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.312843ms"
	I0805 23:48:59.012959       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.913µs"
	I0805 23:49:03.797373       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-rqbsd"
	I0805 23:49:03.825732       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-rqbsd"
	I0805 23:49:03.825806       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-rbtpm"
	I0805 23:49:03.851396       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-rbtpm"
	
	
	==> kube-controller-manager [9d3772211d8011c9a6554ddc5569f3920bbe3050b56a031062e0557cf43be0e2] <==
	I0805 23:40:35.812173       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-342677-m02\" does not exist"
	I0805 23:40:35.824751       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-342677-m02" podCIDRs=["10.244.1.0/24"]
	I0805 23:40:37.426743       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-342677-m02"
	I0805 23:40:55.664315       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-342677-m02"
	I0805 23:40:57.947331       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.865169ms"
	I0805 23:40:57.958606       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.839024ms"
	I0805 23:40:57.959046       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.101µs"
	I0805 23:40:57.959269       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.026µs"
	I0805 23:41:01.262803       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.33961ms"
	I0805 23:41:01.263863       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.513µs"
	I0805 23:41:01.856846       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.622747ms"
	I0805 23:41:01.858076       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.259µs"
	I0805 23:41:30.022443       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-342677-m03\" does not exist"
	I0805 23:41:30.022595       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-342677-m02"
	I0805 23:41:30.077932       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-342677-m03" podCIDRs=["10.244.2.0/24"]
	I0805 23:41:32.450638       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-342677-m03"
	I0805 23:41:50.857843       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-342677-m02"
	I0805 23:42:19.291048       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-342677-m02"
	I0805 23:42:20.251487       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-342677-m03\" does not exist"
	I0805 23:42:20.251540       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-342677-m02"
	I0805 23:42:20.260213       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-342677-m03" podCIDRs=["10.244.3.0/24"]
	I0805 23:42:39.087270       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-342677-m03"
	I0805 23:43:22.507117       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-342677-m02"
	I0805 23:43:22.571737       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.810912ms"
	I0805 23:43:22.571981       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.744µs"
	
	
	==> kube-proxy [3b1d0ef18e29d3787609be51f754e7f2324ee16d19d999762bac401d079a7fd2] <==
	I0805 23:39:49.263232       1 server_linux.go:69] "Using iptables proxy"
	I0805 23:39:49.288380       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.10"]
	I0805 23:39:49.342033       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0805 23:39:49.342067       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 23:39:49.342083       1 server_linux.go:165] "Using iptables Proxier"
	I0805 23:39:49.345380       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 23:39:49.346085       1 server.go:872] "Version info" version="v1.30.3"
	I0805 23:39:49.346132       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 23:39:49.349428       1 config.go:192] "Starting service config controller"
	I0805 23:39:49.349983       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 23:39:49.350044       1 config.go:101] "Starting endpoint slice config controller"
	I0805 23:39:49.350062       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 23:39:49.350895       1 config.go:319] "Starting node config controller"
	I0805 23:39:49.350933       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 23:39:49.450806       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0805 23:39:49.450833       1 shared_informer.go:320] Caches are synced for service config
	I0805 23:39:49.451453       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [3b79ca8015a145db755007359177d373f8fb63ee8d261e67f64838e7af497133] <==
	I0805 23:46:32.266088       1 server_linux.go:69] "Using iptables proxy"
	I0805 23:46:32.300138       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.10"]
	I0805 23:46:32.408626       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0805 23:46:32.409281       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 23:46:32.409364       1 server_linux.go:165] "Using iptables Proxier"
	I0805 23:46:32.414603       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 23:46:32.414874       1 server.go:872] "Version info" version="v1.30.3"
	I0805 23:46:32.414903       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 23:46:32.417649       1 config.go:192] "Starting service config controller"
	I0805 23:46:32.417744       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 23:46:32.419444       1 config.go:101] "Starting endpoint slice config controller"
	I0805 23:46:32.419468       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 23:46:32.420533       1 config.go:319] "Starting node config controller"
	I0805 23:46:32.420560       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 23:46:32.520015       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0805 23:46:32.520101       1 shared_informer.go:320] Caches are synced for service config
	I0805 23:46:32.520820       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5cc7242052f30bef2f21e600e245b76900de63c25a681c55c467489b4bb4cad9] <==
	E0805 23:39:32.056172       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0805 23:39:32.133509       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0805 23:39:32.133770       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0805 23:39:32.170758       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0805 23:39:32.170855       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0805 23:39:32.193853       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0805 23:39:32.193884       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0805 23:39:32.259144       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0805 23:39:32.259248       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0805 23:39:32.275972       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0805 23:39:32.276105       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0805 23:39:32.303521       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0805 23:39:32.303618       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0805 23:39:32.316171       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0805 23:39:32.316214       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0805 23:39:32.362285       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0805 23:39:32.362333       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0805 23:39:32.363895       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0805 23:39:32.363967       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0805 23:39:32.395077       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0805 23:39:32.395164       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0805 23:39:32.554465       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0805 23:39:32.554867       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0805 23:39:35.184013       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0805 23:44:44.816277       1 run.go:74] "command failed" err="finished without leader elect"
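	
	The "forbidden" warnings above are the scheduler starting before the apiserver is fully ready to authorize it; it keeps retrying, and the warnings stop once the caches sync at 23:39:35. The final error is simply the scheduler being terminated along with the node. As a hedged spot check (not part of the test suite), the same permissions can be verified after startup by impersonating the scheduler's user:
	
	// Hypothetical spot check: confirm system:kube-scheduler can list one of the
	// resources the warnings above complain about. A "no" answer makes kubectl exit
	// non-zero, so both output and error are printed.
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		out, err := exec.Command("kubectl", "--context", "multinode-342677",
			"auth", "can-i", "list", "nodes", "--as", "system:kube-scheduler").CombinedOutput()
		fmt.Printf("can-i list nodes: %s(err=%v)\n", out, err)
	}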
	
	
	==> kube-scheduler [943c42b387fc724e757ea4b76361e6a758b577b524c7c10390b65369cea51422] <==
	I0805 23:46:29.145080       1 serving.go:380] Generated self-signed cert in-memory
	W0805 23:46:31.328860       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0805 23:46:31.329016       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0805 23:46:31.329153       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0805 23:46:31.329351       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0805 23:46:31.406082       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0805 23:46:31.406145       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 23:46:31.410413       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0805 23:46:31.410786       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0805 23:46:31.413769       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0805 23:46:31.413892       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0805 23:46:31.515919       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 05 23:46:31 multinode-342677 kubelet[3099]: I0805 23:46:31.577087    3099 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a8a66d1c-c60f-4a75-8104-151faf7922b9-xtables-lock\") pod \"kindnet-6c596\" (UID: \"a8a66d1c-c60f-4a75-8104-151faf7922b9\") " pod="kube-system/kindnet-6c596"
	Aug 05 23:46:31 multinode-342677 kubelet[3099]: I0805 23:46:31.577371    3099 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a8a66d1c-c60f-4a75-8104-151faf7922b9-lib-modules\") pod \"kindnet-6c596\" (UID: \"a8a66d1c-c60f-4a75-8104-151faf7922b9\") " pod="kube-system/kindnet-6c596"
	Aug 05 23:46:31 multinode-342677 kubelet[3099]: I0805 23:46:31.577581    3099 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/71064ea8-4354-4f74-9efc-52487675def4-tmp\") pod \"storage-provisioner\" (UID: \"71064ea8-4354-4f74-9efc-52487675def4\") " pod="kube-system/storage-provisioner"
	Aug 05 23:46:31 multinode-342677 kubelet[3099]: I0805 23:46:31.578286    3099 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ddda1087-36af-4e82-88d3-54e6348c5e22-xtables-lock\") pod \"kube-proxy-2dnzb\" (UID: \"ddda1087-36af-4e82-88d3-54e6348c5e22\") " pod="kube-system/kube-proxy-2dnzb"
	Aug 05 23:46:36 multinode-342677 kubelet[3099]: I0805 23:46:36.161051    3099 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Aug 05 23:47:27 multinode-342677 kubelet[3099]: E0805 23:47:27.535433    3099 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:47:27 multinode-342677 kubelet[3099]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:47:27 multinode-342677 kubelet[3099]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:47:27 multinode-342677 kubelet[3099]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:47:27 multinode-342677 kubelet[3099]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:48:27 multinode-342677 kubelet[3099]: E0805 23:48:27.535604    3099 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:48:27 multinode-342677 kubelet[3099]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:48:27 multinode-342677 kubelet[3099]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:48:27 multinode-342677 kubelet[3099]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:48:27 multinode-342677 kubelet[3099]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:49:27 multinode-342677 kubelet[3099]: E0805 23:49:27.537038    3099 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:49:27 multinode-342677 kubelet[3099]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:49:27 multinode-342677 kubelet[3099]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:49:27 multinode-342677 kubelet[3099]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:49:27 multinode-342677 kubelet[3099]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:50:27 multinode-342677 kubelet[3099]: E0805 23:50:27.535606    3099 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:50:27 multinode-342677 kubelet[3099]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:50:27 multinode-342677 kubelet[3099]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:50:27 multinode-342677 kubelet[3099]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:50:27 multinode-342677 kubelet[3099]:  > table="nat" chain="KUBE-KUBELET-CANARY"
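	
	The kubelet's periodic iptables canary fails only on the IPv6 side: the legacy ip6tables binary cannot find a nat table in the guest kernel ("do you need to insmod?"), which lines up with kube-proxy's "No iptables support for family IPv6" above. A hedged way to confirm from the host (not part of the test suite) is to look for the ip6table_nat module inside the guest over "minikube ssh":
	
	// Hypothetical triage helper: report whether ip6table_nat is loaded in the guest
	// and what ip6tables itself says about the nat table.
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		script := "lsmod | grep ip6table_nat || echo ip6table_nat not loaded; " +
			"sudo ip6tables -t nat -L -n 2>&1 | head -n 2"
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "multinode-342677",
			"ssh", script).CombinedOutput()
		fmt.Printf("%s(err=%v)\n", out, err)
	}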
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0805 23:50:38.487192   49874 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19373-9606/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
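
The "token too long" error comes from Go's bufio.Scanner, which refuses any single token longer than bufio.MaxScanTokenSize (64 KiB) unless the caller enlarges the buffer; lastStart.txt evidently contains such a line. A generic illustration of the failure mode and the usual workaround follows (this is not minikube's own logs.go, and the file path is a placeholder):

	package main
	
	import (
		"bufio"
		"fmt"
		"os"
	)
	
	func main() {
		f, err := os.Open("lastStart.txt") // placeholder path
		if err != nil {
			fmt.Println("open:", err)
			return
		}
		defer f.Close()
	
		sc := bufio.NewScanner(f)
		// Without this call, any line over 64 KiB stops Scan and Err returns
		// bufio.ErrTooLong ("bufio.Scanner: token too long").
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			_ = sc.Text()
		}
		if err := sc.Err(); err != nil {
			fmt.Println("scan:", err)
		}
	}
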
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-342677 -n multinode-342677
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-342677 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.47s)

                                                
                                    
x
+
TestPreload (356.07s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-314505 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0806 00:03:16.354053   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-314505 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (3m33.505066084s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-314505 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-314505 image pull gcr.io/k8s-minikube/busybox: (2.547972051s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-314505
E0806 00:06:49.981562   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/functional-299463/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-314505: exit status 82 (2m0.462792033s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-314505"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-314505 failed: exit status 82
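
Exit status 82 here is minikube's GUEST_STOP_TIMEOUT: the stop command gave up after two minutes with the VM still reported as "Running". One hedged triage step for the kvm2 driver (not part of the test suite) is to ask libvirt directly what state it holds for the domain; this assumes the libvirt domain carries the profile name and that the qemu:///system URI used by the kvm2 driver is reachable for the current user.

	// Hypothetical libvirt check for a "minikube stop" that timed out under kvm2.
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		for _, args := range [][]string{
			{"--connect", "qemu:///system", "list", "--all"},
			{"--connect", "qemu:///system", "domstate", "test-preload-314505"},
		} {
			out, err := exec.Command("virsh", args...).CombinedOutput()
			fmt.Printf("virsh %v:\n%s(err=%v)\n", args, out, err)
		}
	}
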
panic.go:626: *** TestPreload FAILED at 2024-08-06 00:07:37.066544548 +0000 UTC m=+4794.478981056
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-314505 -n test-preload-314505
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-314505 -n test-preload-314505: exit status 3 (18.656181975s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0806 00:07:55.719383   54587 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.28:22: connect: no route to host
	E0806 00:07:55.719403   54587 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.28:22: connect: no route to host

                                                
                                                
** /stderr **
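
The status probe fails at the network layer: TCP to the VM's SSH port gets "no route to host", which usually means the machine is already gone rather than merely refusing SSH. The same probe in isolation, as a minimal sketch outside the test suite:

	// Reproduce the failing dial from the status error above with a short timeout.
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		conn, err := net.DialTimeout("tcp", "192.168.39.28:22", 5*time.Second)
		if err != nil {
			fmt.Println("ssh port unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("ssh port reachable")
	}
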
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-314505" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-314505" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-314505
--- FAIL: TestPreload (356.07s)

                                                
                                    
x
+
TestKubernetesUpgrade (389.35s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-907863 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-907863 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m40.468452997s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-907863] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19373
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19373-9606/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19373-9606/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-907863" primary control-plane node in "kubernetes-upgrade-907863" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0806 00:14:11.819521   61720 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:14:11.819858   61720 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:14:11.819871   61720 out.go:304] Setting ErrFile to fd 2...
	I0806 00:14:11.819878   61720 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:14:11.820169   61720 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-9606/.minikube/bin
	I0806 00:14:11.820954   61720 out.go:298] Setting JSON to false
	I0806 00:14:11.822182   61720 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6998,"bootTime":1722896254,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0806 00:14:11.822260   61720 start.go:139] virtualization: kvm guest
	I0806 00:14:11.824712   61720 out.go:177] * [kubernetes-upgrade-907863] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0806 00:14:11.826680   61720 out.go:177]   - MINIKUBE_LOCATION=19373
	I0806 00:14:11.826689   61720 notify.go:220] Checking for updates...
	I0806 00:14:11.829616   61720 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 00:14:11.831075   61720 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19373-9606/kubeconfig
	I0806 00:14:11.832787   61720 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19373-9606/.minikube
	I0806 00:14:11.834296   61720 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0806 00:14:11.835851   61720 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 00:14:11.837501   61720 config.go:182] Loaded profile config "cert-expiration-272169": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 00:14:11.837600   61720 config.go:182] Loaded profile config "cert-options-323157": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 00:14:11.837703   61720 config.go:182] Loaded profile config "pause-161508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 00:14:11.837770   61720 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 00:14:11.875248   61720 out.go:177] * Using the kvm2 driver based on user configuration
	I0806 00:14:11.876704   61720 start.go:297] selected driver: kvm2
	I0806 00:14:11.876717   61720 start.go:901] validating driver "kvm2" against <nil>
	I0806 00:14:11.876728   61720 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 00:14:11.877412   61720 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:14:11.877489   61720 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19373-9606/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0806 00:14:11.894228   61720 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0806 00:14:11.894283   61720 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 00:14:11.894476   61720 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0806 00:14:11.894536   61720 cni.go:84] Creating CNI manager for ""
	I0806 00:14:11.894549   61720 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 00:14:11.894555   61720 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0806 00:14:11.894602   61720 start.go:340] cluster config:
	{Name:kubernetes-upgrade-907863 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-907863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:14:11.894704   61720 iso.go:125] acquiring lock: {Name:mk54a637ed625e04bb2b6adf973b61c976cd6d35 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:14:11.896498   61720 out.go:177] * Starting "kubernetes-upgrade-907863" primary control-plane node in "kubernetes-upgrade-907863" cluster
	I0806 00:14:11.897775   61720 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0806 00:14:11.897828   61720 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0806 00:14:11.897841   61720 cache.go:56] Caching tarball of preloaded images
	I0806 00:14:11.897969   61720 preload.go:172] Found /home/jenkins/minikube-integration/19373-9606/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0806 00:14:11.897985   61720 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0806 00:14:11.898085   61720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/config.json ...
	I0806 00:14:11.898103   61720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/config.json: {Name:mkaeba9202169f9368a528cadd42af068d3b8b79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:14:11.898383   61720 start.go:360] acquireMachinesLock for kubernetes-upgrade-907863: {Name:mkd2ba511c39504598222edbf83078b718329186 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:14:22.464198   61720 start.go:364] duration metric: took 10.565782258s to acquireMachinesLock for "kubernetes-upgrade-907863"
	I0806 00:14:22.464286   61720 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-907863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-907863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 00:14:22.464404   61720 start.go:125] createHost starting for "" (driver="kvm2")
	I0806 00:14:22.466747   61720 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0806 00:14:22.466934   61720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 00:14:22.466990   61720 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 00:14:22.486340   61720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38979
	I0806 00:14:22.486775   61720 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:14:22.487331   61720 main.go:141] libmachine: Using API Version  1
	I0806 00:14:22.487346   61720 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:14:22.487711   61720 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:14:22.487911   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetMachineName
	I0806 00:14:22.488146   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .DriverName
	I0806 00:14:22.488337   61720 start.go:159] libmachine.API.Create for "kubernetes-upgrade-907863" (driver="kvm2")
	I0806 00:14:22.488366   61720 client.go:168] LocalClient.Create starting
	I0806 00:14:22.488406   61720 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem
	I0806 00:14:22.488448   61720 main.go:141] libmachine: Decoding PEM data...
	I0806 00:14:22.488472   61720 main.go:141] libmachine: Parsing certificate...
	I0806 00:14:22.488542   61720 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem
	I0806 00:14:22.488568   61720 main.go:141] libmachine: Decoding PEM data...
	I0806 00:14:22.488589   61720 main.go:141] libmachine: Parsing certificate...
	I0806 00:14:22.488613   61720 main.go:141] libmachine: Running pre-create checks...
	I0806 00:14:22.488631   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .PreCreateCheck
	I0806 00:14:22.489048   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetConfigRaw
	I0806 00:14:22.489541   61720 main.go:141] libmachine: Creating machine...
	I0806 00:14:22.489557   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .Create
	I0806 00:14:22.489697   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Creating KVM machine...
	I0806 00:14:22.491306   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found existing default KVM network
	I0806 00:14:22.492929   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | I0806 00:14:22.492755   61806 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:48:19:b6} reservation:<nil>}
	I0806 00:14:22.493945   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | I0806 00:14:22.493866   61806 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:9b:c7:ec} reservation:<nil>}
	I0806 00:14:22.494843   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | I0806 00:14:22.494762   61806 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:82:7a:db} reservation:<nil>}
	I0806 00:14:22.495947   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | I0806 00:14:22.495863   61806 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003c25e0}
	I0806 00:14:22.495971   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | created network xml: 
	I0806 00:14:22.495988   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | <network>
	I0806 00:14:22.496002   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG |   <name>mk-kubernetes-upgrade-907863</name>
	I0806 00:14:22.496024   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG |   <dns enable='no'/>
	I0806 00:14:22.496039   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG |   
	I0806 00:14:22.496069   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0806 00:14:22.496080   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG |     <dhcp>
	I0806 00:14:22.496091   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0806 00:14:22.496101   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG |     </dhcp>
	I0806 00:14:22.496112   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG |   </ip>
	I0806 00:14:22.496125   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG |   
	I0806 00:14:22.496136   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | </network>
	I0806 00:14:22.496143   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | 
	I0806 00:14:22.501515   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | trying to create private KVM network mk-kubernetes-upgrade-907863 192.168.72.0/24...
	I0806 00:14:22.572148   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | private KVM network mk-kubernetes-upgrade-907863 192.168.72.0/24 created
	I0806 00:14:22.572178   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Setting up store path in /home/jenkins/minikube-integration/19373-9606/.minikube/machines/kubernetes-upgrade-907863 ...
	I0806 00:14:22.572193   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | I0806 00:14:22.572112   61806 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19373-9606/.minikube
	I0806 00:14:22.572207   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Building disk image from file:///home/jenkins/minikube-integration/19373-9606/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0806 00:14:22.572456   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Downloading /home/jenkins/minikube-integration/19373-9606/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19373-9606/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0806 00:14:22.847028   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | I0806 00:14:22.846888   61806 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/kubernetes-upgrade-907863/id_rsa...
	I0806 00:14:23.044080   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | I0806 00:14:23.043936   61806 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/kubernetes-upgrade-907863/kubernetes-upgrade-907863.rawdisk...
	I0806 00:14:23.044119   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | Writing magic tar header
	I0806 00:14:23.044138   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | Writing SSH key tar header
	I0806 00:14:23.044153   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | I0806 00:14:23.044074   61806 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19373-9606/.minikube/machines/kubernetes-upgrade-907863 ...
	I0806 00:14:23.044202   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/kubernetes-upgrade-907863
	I0806 00:14:23.044291   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Setting executable bit set on /home/jenkins/minikube-integration/19373-9606/.minikube/machines/kubernetes-upgrade-907863 (perms=drwx------)
	I0806 00:14:23.044318   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19373-9606/.minikube/machines
	I0806 00:14:23.044334   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Setting executable bit set on /home/jenkins/minikube-integration/19373-9606/.minikube/machines (perms=drwxr-xr-x)
	I0806 00:14:23.044353   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Setting executable bit set on /home/jenkins/minikube-integration/19373-9606/.minikube (perms=drwxr-xr-x)
	I0806 00:14:23.044366   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Setting executable bit set on /home/jenkins/minikube-integration/19373-9606 (perms=drwxrwxr-x)
	I0806 00:14:23.044380   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0806 00:14:23.044392   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0806 00:14:23.044406   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19373-9606/.minikube
	I0806 00:14:23.044417   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Creating domain...
	I0806 00:14:23.044435   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19373-9606
	I0806 00:14:23.044452   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0806 00:14:23.044464   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | Checking permissions on dir: /home/jenkins
	I0806 00:14:23.044476   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | Checking permissions on dir: /home
	I0806 00:14:23.044485   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | Skipping /home - not owner
	I0806 00:14:23.045538   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) define libvirt domain using xml: 
	I0806 00:14:23.045558   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) <domain type='kvm'>
	I0806 00:14:23.045566   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)   <name>kubernetes-upgrade-907863</name>
	I0806 00:14:23.045571   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)   <memory unit='MiB'>2200</memory>
	I0806 00:14:23.045577   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)   <vcpu>2</vcpu>
	I0806 00:14:23.045584   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)   <features>
	I0806 00:14:23.045598   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)     <acpi/>
	I0806 00:14:23.045614   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)     <apic/>
	I0806 00:14:23.045622   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)     <pae/>
	I0806 00:14:23.045636   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)     
	I0806 00:14:23.045647   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)   </features>
	I0806 00:14:23.045657   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)   <cpu mode='host-passthrough'>
	I0806 00:14:23.045663   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)   
	I0806 00:14:23.045671   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)   </cpu>
	I0806 00:14:23.045682   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)   <os>
	I0806 00:14:23.045689   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)     <type>hvm</type>
	I0806 00:14:23.045698   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)     <boot dev='cdrom'/>
	I0806 00:14:23.045710   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)     <boot dev='hd'/>
	I0806 00:14:23.045723   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)     <bootmenu enable='no'/>
	I0806 00:14:23.045732   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)   </os>
	I0806 00:14:23.045743   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)   <devices>
	I0806 00:14:23.045754   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)     <disk type='file' device='cdrom'>
	I0806 00:14:23.045775   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)       <source file='/home/jenkins/minikube-integration/19373-9606/.minikube/machines/kubernetes-upgrade-907863/boot2docker.iso'/>
	I0806 00:14:23.045793   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)       <target dev='hdc' bus='scsi'/>
	I0806 00:14:23.045802   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)       <readonly/>
	I0806 00:14:23.045809   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)     </disk>
	I0806 00:14:23.045823   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)     <disk type='file' device='disk'>
	I0806 00:14:23.045835   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0806 00:14:23.045851   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)       <source file='/home/jenkins/minikube-integration/19373-9606/.minikube/machines/kubernetes-upgrade-907863/kubernetes-upgrade-907863.rawdisk'/>
	I0806 00:14:23.045866   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)       <target dev='hda' bus='virtio'/>
	I0806 00:14:23.045879   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)     </disk>
	I0806 00:14:23.045890   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)     <interface type='network'>
	I0806 00:14:23.045903   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)       <source network='mk-kubernetes-upgrade-907863'/>
	I0806 00:14:23.045911   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)       <model type='virtio'/>
	I0806 00:14:23.045922   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)     </interface>
	I0806 00:14:23.045935   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)     <interface type='network'>
	I0806 00:14:23.045952   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)       <source network='default'/>
	I0806 00:14:23.045964   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)       <model type='virtio'/>
	I0806 00:14:23.045977   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)     </interface>
	I0806 00:14:23.045987   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)     <serial type='pty'>
	I0806 00:14:23.046017   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)       <target port='0'/>
	I0806 00:14:23.046037   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)     </serial>
	I0806 00:14:23.046059   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)     <console type='pty'>
	I0806 00:14:23.046082   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)       <target type='serial' port='0'/>
	I0806 00:14:23.046093   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)     </console>
	I0806 00:14:23.046100   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)     <rng model='virtio'>
	I0806 00:14:23.046107   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)       <backend model='random'>/dev/random</backend>
	I0806 00:14:23.046112   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)     </rng>
	I0806 00:14:23.046117   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)     
	I0806 00:14:23.046122   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)     
	I0806 00:14:23.046129   61720 main.go:141] libmachine: (kubernetes-upgrade-907863)   </devices>
	I0806 00:14:23.046146   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) </domain>
	I0806 00:14:23.046158   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) 
	I0806 00:14:23.051232   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:58:67:f9 in network default
	I0806 00:14:23.051853   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Ensuring networks are active...
	I0806 00:14:23.051881   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:23.052731   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Ensuring network default is active
	I0806 00:14:23.053090   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Ensuring network mk-kubernetes-upgrade-907863 is active
	I0806 00:14:23.053709   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Getting domain xml...
	I0806 00:14:23.054545   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Creating domain...
	I0806 00:14:24.449226   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Waiting to get IP...
	I0806 00:14:24.450356   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:24.450859   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | unable to find current IP address of domain kubernetes-upgrade-907863 in network mk-kubernetes-upgrade-907863
	I0806 00:14:24.450914   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | I0806 00:14:24.450837   61806 retry.go:31] will retry after 280.73133ms: waiting for machine to come up
	I0806 00:14:24.733540   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:24.734077   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | unable to find current IP address of domain kubernetes-upgrade-907863 in network mk-kubernetes-upgrade-907863
	I0806 00:14:24.734131   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | I0806 00:14:24.734045   61806 retry.go:31] will retry after 333.904526ms: waiting for machine to come up
	I0806 00:14:25.069765   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:25.070439   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | unable to find current IP address of domain kubernetes-upgrade-907863 in network mk-kubernetes-upgrade-907863
	I0806 00:14:25.070463   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | I0806 00:14:25.070391   61806 retry.go:31] will retry after 345.77813ms: waiting for machine to come up
	I0806 00:14:25.418105   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:25.418723   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | unable to find current IP address of domain kubernetes-upgrade-907863 in network mk-kubernetes-upgrade-907863
	I0806 00:14:25.418765   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | I0806 00:14:25.418699   61806 retry.go:31] will retry after 582.827729ms: waiting for machine to come up
	I0806 00:14:26.003555   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:26.004145   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | unable to find current IP address of domain kubernetes-upgrade-907863 in network mk-kubernetes-upgrade-907863
	I0806 00:14:26.004175   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | I0806 00:14:26.004076   61806 retry.go:31] will retry after 667.365479ms: waiting for machine to come up
	I0806 00:14:26.672749   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:26.673319   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | unable to find current IP address of domain kubernetes-upgrade-907863 in network mk-kubernetes-upgrade-907863
	I0806 00:14:26.673367   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | I0806 00:14:26.673270   61806 retry.go:31] will retry after 827.777507ms: waiting for machine to come up
	I0806 00:14:27.503445   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:27.504082   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | unable to find current IP address of domain kubernetes-upgrade-907863 in network mk-kubernetes-upgrade-907863
	I0806 00:14:27.504105   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | I0806 00:14:27.504028   61806 retry.go:31] will retry after 1.101635274s: waiting for machine to come up
	I0806 00:14:28.607039   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:28.607486   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | unable to find current IP address of domain kubernetes-upgrade-907863 in network mk-kubernetes-upgrade-907863
	I0806 00:14:28.607511   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | I0806 00:14:28.607459   61806 retry.go:31] will retry after 1.159023791s: waiting for machine to come up
	I0806 00:14:29.768206   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:29.768812   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | unable to find current IP address of domain kubernetes-upgrade-907863 in network mk-kubernetes-upgrade-907863
	I0806 00:14:29.768843   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | I0806 00:14:29.768750   61806 retry.go:31] will retry after 1.740886587s: waiting for machine to come up
	I0806 00:14:31.511246   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:31.511719   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | unable to find current IP address of domain kubernetes-upgrade-907863 in network mk-kubernetes-upgrade-907863
	I0806 00:14:31.511738   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | I0806 00:14:31.511701   61806 retry.go:31] will retry after 1.403466714s: waiting for machine to come up
	I0806 00:14:32.917307   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:32.917815   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | unable to find current IP address of domain kubernetes-upgrade-907863 in network mk-kubernetes-upgrade-907863
	I0806 00:14:32.917877   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | I0806 00:14:32.917776   61806 retry.go:31] will retry after 1.839364761s: waiting for machine to come up
	I0806 00:14:34.760061   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:34.760863   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | unable to find current IP address of domain kubernetes-upgrade-907863 in network mk-kubernetes-upgrade-907863
	I0806 00:14:34.760891   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | I0806 00:14:34.760777   61806 retry.go:31] will retry after 2.97914155s: waiting for machine to come up
	I0806 00:14:37.741716   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:37.742359   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | unable to find current IP address of domain kubernetes-upgrade-907863 in network mk-kubernetes-upgrade-907863
	I0806 00:14:37.742385   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | I0806 00:14:37.742309   61806 retry.go:31] will retry after 3.434033982s: waiting for machine to come up
	I0806 00:14:41.180419   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:41.180838   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | unable to find current IP address of domain kubernetes-upgrade-907863 in network mk-kubernetes-upgrade-907863
	I0806 00:14:41.180864   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | I0806 00:14:41.180791   61806 retry.go:31] will retry after 3.670615945s: waiting for machine to come up
	I0806 00:14:44.852616   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:44.853078   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Found IP for machine: 192.168.72.112
	I0806 00:14:44.853113   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has current primary IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:44.853123   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Reserving static IP address...
	I0806 00:14:44.853598   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-907863", mac: "52:54:00:f6:6f:99", ip: "192.168.72.112"} in network mk-kubernetes-upgrade-907863
	I0806 00:14:44.937213   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | Getting to WaitForSSH function...
	I0806 00:14:44.937245   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Reserved static IP address: 192.168.72.112
	I0806 00:14:44.937261   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Waiting for SSH to be available...
	I0806 00:14:44.940137   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:44.940632   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:44.940673   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:44.940801   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | Using SSH client type: external
	I0806 00:14:44.940825   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | Using SSH private key: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/kubernetes-upgrade-907863/id_rsa (-rw-------)
	I0806 00:14:44.940862   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.112 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19373-9606/.minikube/machines/kubernetes-upgrade-907863/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0806 00:14:44.940880   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | About to run SSH command:
	I0806 00:14:44.940892   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | exit 0
	I0806 00:14:45.063499   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | SSH cmd err, output: <nil>: 
	I0806 00:14:45.063807   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) KVM machine creation complete!
	I0806 00:14:45.064161   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetConfigRaw
	I0806 00:14:45.064785   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .DriverName
	I0806 00:14:45.064974   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .DriverName
	I0806 00:14:45.065103   61720 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0806 00:14:45.065125   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetState
	I0806 00:14:45.066700   61720 main.go:141] libmachine: Detecting operating system of created instance...
	I0806 00:14:45.066716   61720 main.go:141] libmachine: Waiting for SSH to be available...
	I0806 00:14:45.066724   61720 main.go:141] libmachine: Getting to WaitForSSH function...
	I0806 00:14:45.066732   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHHostname
	I0806 00:14:45.069985   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.070424   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:45.070453   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.070630   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHPort
	I0806 00:14:45.070807   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:45.071003   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:45.071149   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHUsername
	I0806 00:14:45.071287   61720 main.go:141] libmachine: Using SSH client type: native
	I0806 00:14:45.071475   61720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0806 00:14:45.071492   61720 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0806 00:14:45.178702   61720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:14:45.178751   61720 main.go:141] libmachine: Detecting the provisioner...
	I0806 00:14:45.178767   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHHostname
	I0806 00:14:45.182067   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.182470   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:45.182517   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.182630   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHPort
	I0806 00:14:45.182863   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:45.183077   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:45.183250   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHUsername
	I0806 00:14:45.183416   61720 main.go:141] libmachine: Using SSH client type: native
	I0806 00:14:45.183625   61720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0806 00:14:45.183636   61720 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0806 00:14:45.283887   61720 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0806 00:14:45.283956   61720 main.go:141] libmachine: found compatible host: buildroot
	I0806 00:14:45.283966   61720 main.go:141] libmachine: Provisioning with buildroot...
	I0806 00:14:45.283978   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetMachineName
	I0806 00:14:45.284240   61720 buildroot.go:166] provisioning hostname "kubernetes-upgrade-907863"
	I0806 00:14:45.284270   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetMachineName
	I0806 00:14:45.284472   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHHostname
	I0806 00:14:45.287574   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.287912   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:45.287955   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.288147   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHPort
	I0806 00:14:45.288338   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:45.288509   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:45.288713   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHUsername
	I0806 00:14:45.288922   61720 main.go:141] libmachine: Using SSH client type: native
	I0806 00:14:45.289156   61720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0806 00:14:45.289167   61720 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-907863 && echo "kubernetes-upgrade-907863" | sudo tee /etc/hostname
	I0806 00:14:45.413866   61720 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-907863
	
	I0806 00:14:45.413901   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHHostname
	I0806 00:14:45.417554   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.418009   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:45.418043   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.418153   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHPort
	I0806 00:14:45.418331   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:45.418573   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:45.418717   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHUsername
	I0806 00:14:45.418894   61720 main.go:141] libmachine: Using SSH client type: native
	I0806 00:14:45.419083   61720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0806 00:14:45.419103   61720 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-907863' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-907863/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-907863' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 00:14:45.530368   61720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:14:45.530403   61720 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19373-9606/.minikube CaCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19373-9606/.minikube}
	I0806 00:14:45.530459   61720 buildroot.go:174] setting up certificates
	I0806 00:14:45.530478   61720 provision.go:84] configureAuth start
	I0806 00:14:45.530497   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetMachineName
	I0806 00:14:45.530793   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetIP
	I0806 00:14:45.533849   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.534237   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:45.534262   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.534404   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHHostname
	I0806 00:14:45.536544   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.536851   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:45.536890   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.537001   61720 provision.go:143] copyHostCerts
	I0806 00:14:45.537066   61720 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem, removing ...
	I0806 00:14:45.537083   61720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem
	I0806 00:14:45.537142   61720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem (1679 bytes)
	I0806 00:14:45.537260   61720 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem, removing ...
	I0806 00:14:45.537272   61720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem
	I0806 00:14:45.537309   61720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem (1082 bytes)
	I0806 00:14:45.537395   61720 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem, removing ...
	I0806 00:14:45.537405   61720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem
	I0806 00:14:45.537432   61720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem (1123 bytes)
	I0806 00:14:45.537496   61720 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-907863 san=[127.0.0.1 192.168.72.112 kubernetes-upgrade-907863 localhost minikube]
	I0806 00:14:45.648251   61720 provision.go:177] copyRemoteCerts
	I0806 00:14:45.648303   61720 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 00:14:45.648333   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHHostname
	I0806 00:14:45.650992   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.651510   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:45.651534   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.651720   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHPort
	I0806 00:14:45.651912   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:45.652105   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHUsername
	I0806 00:14:45.652257   61720 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/kubernetes-upgrade-907863/id_rsa Username:docker}
	I0806 00:14:45.733623   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0806 00:14:45.759216   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0806 00:14:45.785907   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 00:14:45.812282   61720 provision.go:87] duration metric: took 281.788709ms to configureAuth
	I0806 00:14:45.812310   61720 buildroot.go:189] setting minikube options for container-runtime
	I0806 00:14:45.812466   61720 config.go:182] Loaded profile config "kubernetes-upgrade-907863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0806 00:14:45.812527   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHHostname
	I0806 00:14:45.815951   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.816375   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:45.816401   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.816598   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHPort
	I0806 00:14:45.816826   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:45.816995   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:45.817171   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHUsername
	I0806 00:14:45.817360   61720 main.go:141] libmachine: Using SSH client type: native
	I0806 00:14:45.817605   61720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0806 00:14:45.817633   61720 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 00:14:46.096742   61720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 00:14:46.096779   61720 main.go:141] libmachine: Checking connection to Docker...
	I0806 00:14:46.096793   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetURL
	I0806 00:14:46.098348   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | Using libvirt version 6000000
	I0806 00:14:46.100964   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:46.101255   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:46.101277   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:46.101439   61720 main.go:141] libmachine: Docker is up and running!
	I0806 00:14:46.101449   61720 main.go:141] libmachine: Reticulating splines...
	I0806 00:14:46.101457   61720 client.go:171] duration metric: took 23.613079714s to LocalClient.Create
	I0806 00:14:46.101483   61720 start.go:167] duration metric: took 23.613147049s to libmachine.API.Create "kubernetes-upgrade-907863"
	I0806 00:14:46.101494   61720 start.go:293] postStartSetup for "kubernetes-upgrade-907863" (driver="kvm2")
	I0806 00:14:46.101508   61720 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 00:14:46.101531   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .DriverName
	I0806 00:14:46.101781   61720 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 00:14:46.101829   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHHostname
	I0806 00:14:46.104347   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:46.104786   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:46.104813   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:46.105081   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHPort
	I0806 00:14:46.105257   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:46.105445   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHUsername
	I0806 00:14:46.105604   61720 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/kubernetes-upgrade-907863/id_rsa Username:docker}
	I0806 00:14:46.188914   61720 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 00:14:46.193808   61720 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 00:14:46.193837   61720 filesync.go:126] Scanning /home/jenkins/minikube-integration/19373-9606/.minikube/addons for local assets ...
	I0806 00:14:46.193939   61720 filesync.go:126] Scanning /home/jenkins/minikube-integration/19373-9606/.minikube/files for local assets ...
	I0806 00:14:46.194050   61720 filesync.go:149] local asset: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem -> 167922.pem in /etc/ssl/certs
	I0806 00:14:46.194181   61720 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 00:14:46.208786   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem --> /etc/ssl/certs/167922.pem (1708 bytes)
	I0806 00:14:46.234274   61720 start.go:296] duration metric: took 132.765664ms for postStartSetup
	I0806 00:14:46.234326   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetConfigRaw
	I0806 00:14:46.234938   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetIP
	I0806 00:14:46.237911   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:46.238167   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:46.238204   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:46.238390   61720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/config.json ...
	I0806 00:14:46.238584   61720 start.go:128] duration metric: took 23.774163741s to createHost
	I0806 00:14:46.238611   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHHostname
	I0806 00:14:46.240741   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:46.241026   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:46.241051   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:46.241251   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHPort
	I0806 00:14:46.241413   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:46.241580   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:46.241731   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHUsername
	I0806 00:14:46.241879   61720 main.go:141] libmachine: Using SSH client type: native
	I0806 00:14:46.242047   61720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0806 00:14:46.242056   61720 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0806 00:14:46.343871   61720 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722903286.327466870
	
	I0806 00:14:46.343890   61720 fix.go:216] guest clock: 1722903286.327466870
	I0806 00:14:46.343897   61720 fix.go:229] Guest: 2024-08-06 00:14:46.32746687 +0000 UTC Remote: 2024-08-06 00:14:46.238596191 +0000 UTC m=+34.462085673 (delta=88.870679ms)
	I0806 00:14:46.343917   61720 fix.go:200] guest clock delta is within tolerance: 88.870679ms
	I0806 00:14:46.343921   61720 start.go:83] releasing machines lock for "kubernetes-upgrade-907863", held for 23.87968491s
	I0806 00:14:46.343942   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .DriverName
	I0806 00:14:46.344223   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetIP
	I0806 00:14:46.346864   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:46.347365   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:46.347401   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:46.347607   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .DriverName
	I0806 00:14:46.348173   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .DriverName
	I0806 00:14:46.348392   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .DriverName
	I0806 00:14:46.348501   61720 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 00:14:46.348545   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHHostname
	I0806 00:14:46.348623   61720 ssh_runner.go:195] Run: cat /version.json
	I0806 00:14:46.348646   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHHostname
	I0806 00:14:46.351386   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:46.351533   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:46.351746   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:46.351771   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:46.351895   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHPort
	I0806 00:14:46.352002   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:46.352025   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:46.352047   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:46.352224   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHPort
	I0806 00:14:46.352225   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHUsername
	I0806 00:14:46.352398   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:46.352410   61720 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/kubernetes-upgrade-907863/id_rsa Username:docker}
	I0806 00:14:46.352531   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHUsername
	I0806 00:14:46.352676   61720 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/kubernetes-upgrade-907863/id_rsa Username:docker}
	I0806 00:14:46.432018   61720 ssh_runner.go:195] Run: systemctl --version
	I0806 00:14:46.454693   61720 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 00:14:46.628607   61720 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 00:14:46.638675   61720 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 00:14:46.638749   61720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 00:14:46.659007   61720 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 00:14:46.659034   61720 start.go:495] detecting cgroup driver to use...
	I0806 00:14:46.659142   61720 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 00:14:46.680151   61720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:14:46.698336   61720 docker.go:217] disabling cri-docker service (if available) ...
	I0806 00:14:46.698504   61720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 00:14:46.715093   61720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 00:14:46.730157   61720 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 00:14:46.849640   61720 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 00:14:47.007622   61720 docker.go:233] disabling docker service ...
	I0806 00:14:47.007694   61720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 00:14:47.022913   61720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 00:14:47.037788   61720 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 00:14:47.172160   61720 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 00:14:47.297771   61720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 00:14:47.315774   61720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:14:47.335893   61720 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0806 00:14:47.335977   61720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 00:14:47.350348   61720 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 00:14:47.350417   61720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 00:14:47.362187   61720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 00:14:47.375760   61720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 00:14:47.388776   61720 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 00:14:47.401761   61720 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 00:14:47.412720   61720 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0806 00:14:47.412787   61720 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0806 00:14:47.428644   61720 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 00:14:47.440189   61720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:14:47.553614   61720 ssh_runner.go:195] Run: sudo systemctl restart crio
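Note: the sequence above is minikube's CRI-O preparation on the node: point crictl at the CRI-O socket, set the pause image and cgroup driver with sed, then reload and restart the runtime. As a rough stand-alone sketch (assuming the same paths and values that appear in the log lines above, not an exact reproduction of minikube's code):

	# point crictl at CRI-O, same endpoint the log writes to /etc/crictl.yaml
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# pause image and cgroup driver, mirroring the sed commands in the log
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl daemon-reload && sudo systemctl restart crio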
	I0806 00:14:47.698481   61720 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 00:14:47.698569   61720 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 00:14:47.703748   61720 start.go:563] Will wait 60s for crictl version
	I0806 00:14:47.703812   61720 ssh_runner.go:195] Run: which crictl
	I0806 00:14:47.708040   61720 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 00:14:47.749798   61720 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 00:14:47.749884   61720 ssh_runner.go:195] Run: crio --version
	I0806 00:14:47.779166   61720 ssh_runner.go:195] Run: crio --version
	I0806 00:14:47.812309   61720 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0806 00:14:47.813708   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetIP
	I0806 00:14:47.816108   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:47.816440   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:47.816466   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:47.816644   61720 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0806 00:14:47.821182   61720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 00:14:47.834308   61720 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-907863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-907863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 00:14:47.834420   61720 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0806 00:14:47.834474   61720 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 00:14:47.868197   61720 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0806 00:14:47.868274   61720 ssh_runner.go:195] Run: which lz4
	I0806 00:14:47.872506   61720 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0806 00:14:47.877108   61720 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 00:14:47.877144   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0806 00:14:49.556736   61720 crio.go:462] duration metric: took 1.684254918s to copy over tarball
	I0806 00:14:49.556831   61720 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0806 00:14:52.132666   61720 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.575793522s)
	I0806 00:14:52.132708   61720 crio.go:469] duration metric: took 2.575934958s to extract the tarball
	I0806 00:14:52.132718   61720 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0806 00:14:52.178655   61720 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 00:14:52.228379   61720 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0806 00:14:52.228410   61720 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0806 00:14:52.228492   61720 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0806 00:14:52.228495   61720 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:14:52.228503   61720 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0806 00:14:52.228568   61720 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0806 00:14:52.228594   61720 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0806 00:14:52.228592   61720 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0806 00:14:52.228636   61720 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0806 00:14:52.228641   61720 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 00:14:52.229894   61720 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0806 00:14:52.229923   61720 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0806 00:14:52.229926   61720 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0806 00:14:52.229893   61720 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0806 00:14:52.229939   61720 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:14:52.229949   61720 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0806 00:14:52.229901   61720 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0806 00:14:52.229956   61720 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 00:14:52.369603   61720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0806 00:14:52.373690   61720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0806 00:14:52.419211   61720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 00:14:52.421008   61720 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0806 00:14:52.421051   61720 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0806 00:14:52.421110   61720 ssh_runner.go:195] Run: which crictl
	I0806 00:14:52.437302   61720 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0806 00:14:52.437345   61720 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0806 00:14:52.437393   61720 ssh_runner.go:195] Run: which crictl
	I0806 00:14:52.449950   61720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0806 00:14:52.469829   61720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0806 00:14:52.469876   61720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0806 00:14:52.470021   61720 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0806 00:14:52.470060   61720 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 00:14:52.470095   61720 ssh_runner.go:195] Run: which crictl
	I0806 00:14:52.544769   61720 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0806 00:14:52.545022   61720 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0806 00:14:52.545061   61720 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0806 00:14:52.545119   61720 ssh_runner.go:195] Run: which crictl
	I0806 00:14:52.557704   61720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 00:14:52.557713   61720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0806 00:14:52.557759   61720 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0806 00:14:52.575840   61720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0806 00:14:52.598803   61720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0806 00:14:52.633351   61720 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0806 00:14:52.633411   61720 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0806 00:14:52.640820   61720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0806 00:14:52.664322   61720 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0806 00:14:52.664375   61720 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0806 00:14:52.664433   61720 ssh_runner.go:195] Run: which crictl
	I0806 00:14:52.686664   61720 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0806 00:14:52.686704   61720 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0806 00:14:52.686751   61720 ssh_runner.go:195] Run: which crictl
	I0806 00:14:52.710267   61720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0806 00:14:52.710288   61720 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0806 00:14:52.710295   61720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0806 00:14:52.710322   61720 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0806 00:14:52.710348   61720 ssh_runner.go:195] Run: which crictl
	I0806 00:14:52.753209   61720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0806 00:14:52.773045   61720 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0806 00:14:52.773045   61720 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0806 00:14:52.794936   61720 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0806 00:14:53.168413   61720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:14:53.311791   61720 cache_images.go:92] duration metric: took 1.083360411s to LoadCachedImages
	W0806 00:14:53.311894   61720 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19373-9606/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19373-9606/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
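Note: the two "assuming images are not preloaded" checks plus the LoadCachedImages warning above show the image-provisioning logic: the preload tarball did not contain the v1.20.0 images and the per-image cache under .minikube/cache/images is incomplete, so kubeadm will pull the images itself during preflight. A hypothetical manual pre-pull of the same image set on the node (using the standard crictl pull command, not something minikube runs here) would look like:

	# pull the images listed in the LoadCachedImages set directly into CRI-O
	for img in registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 \
	           registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 \
	           registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0; do
	  sudo crictl pull "$img"
	done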
	I0806 00:14:53.311912   61720 kubeadm.go:934] updating node { 192.168.72.112 8443 v1.20.0 crio true true} ...
	I0806 00:14:53.312034   61720 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-907863 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.112
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-907863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 00:14:53.312108   61720 ssh_runner.go:195] Run: crio config
	I0806 00:14:53.380642   61720 cni.go:84] Creating CNI manager for ""
	I0806 00:14:53.380662   61720 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 00:14:53.380674   61720 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 00:14:53.380698   61720 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.112 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-907863 NodeName:kubernetes-upgrade-907863 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.112"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.112 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0806 00:14:53.380923   61720 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.112
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-907863"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.112
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.112"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
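Note: the kubeadm config printed above is what gets scp'd to /var/tmp/minikube/kubeadm.yaml.new and later copied to /var/tmp/minikube/kubeadm.yaml before init. A way to exercise the same file without changing node state (a sketch, assuming the pinned kubeadm binary supports the standard --dry-run flag, which v1.20 does) would be:

	# dry-run the generated config with the version-pinned kubeadm; no cluster changes are made
	sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run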
	I0806 00:14:53.380997   61720 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0806 00:14:53.395339   61720 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 00:14:53.395423   61720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 00:14:53.411555   61720 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0806 00:14:53.433132   61720 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 00:14:53.455825   61720 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0806 00:14:53.476294   61720 ssh_runner.go:195] Run: grep 192.168.72.112	control-plane.minikube.internal$ /etc/hosts
	I0806 00:14:53.480668   61720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.112	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 00:14:53.499600   61720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:14:53.652974   61720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 00:14:53.677860   61720 certs.go:68] Setting up /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863 for IP: 192.168.72.112
	I0806 00:14:53.677891   61720 certs.go:194] generating shared ca certs ...
	I0806 00:14:53.677911   61720 certs.go:226] acquiring lock for ca certs: {Name:mkf35a042c1656d191f542eee7fa087aad4d29d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:14:53.678068   61720 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key
	I0806 00:14:53.678134   61720 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key
	I0806 00:14:53.678149   61720 certs.go:256] generating profile certs ...
	I0806 00:14:53.678226   61720 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/client.key
	I0806 00:14:53.678247   61720 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/client.crt with IP's: []
	I0806 00:14:53.891591   61720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/client.crt ...
	I0806 00:14:53.891629   61720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/client.crt: {Name:mka73080179836a3e5f00f6563ab46864f07d0b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:14:53.891808   61720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/client.key ...
	I0806 00:14:53.891824   61720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/client.key: {Name:mka33cfcfc39b86c3df16be006a98c42ce1b23f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:14:53.891911   61720 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/apiserver.key.777d71ca
	I0806 00:14:53.891933   61720 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/apiserver.crt.777d71ca with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.112]
	I0806 00:14:54.037095   61720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/apiserver.crt.777d71ca ...
	I0806 00:14:54.037146   61720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/apiserver.crt.777d71ca: {Name:mkdbd1ad9bf1e099ce927cbbd16ee9537c57abec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:14:54.037338   61720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/apiserver.key.777d71ca ...
	I0806 00:14:54.037353   61720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/apiserver.key.777d71ca: {Name:mke232de9779080cad9e9caed41be9d6d22833d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:14:54.037428   61720 certs.go:381] copying /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/apiserver.crt.777d71ca -> /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/apiserver.crt
	I0806 00:14:54.037527   61720 certs.go:385] copying /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/apiserver.key.777d71ca -> /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/apiserver.key
	I0806 00:14:54.037593   61720 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/proxy-client.key
	I0806 00:14:54.037611   61720 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/proxy-client.crt with IP's: []
	I0806 00:14:54.104925   61720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/proxy-client.crt ...
	I0806 00:14:54.104968   61720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/proxy-client.crt: {Name:mk963f01277aaeaa47218702211ab49a2a05b2d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:14:54.158476   61720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/proxy-client.key ...
	I0806 00:14:54.158516   61720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/proxy-client.key: {Name:mk3b859fbef7364d8f865e5e69cf276e01b899be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:14:54.158797   61720 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792.pem (1338 bytes)
	W0806 00:14:54.158850   61720 certs.go:480] ignoring /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792_empty.pem, impossibly tiny 0 bytes
	I0806 00:14:54.158864   61720 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem (1679 bytes)
	I0806 00:14:54.158896   61720 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem (1082 bytes)
	I0806 00:14:54.158952   61720 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem (1123 bytes)
	I0806 00:14:54.158997   61720 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem (1679 bytes)
	I0806 00:14:54.159081   61720 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem (1708 bytes)
	I0806 00:14:54.159895   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 00:14:54.189387   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 00:14:54.217878   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 00:14:54.247509   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0806 00:14:54.276131   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0806 00:14:54.306893   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0806 00:14:54.334293   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 00:14:54.362197   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 00:14:54.392159   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem --> /usr/share/ca-certificates/167922.pem (1708 bytes)
	I0806 00:14:54.420680   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 00:14:54.456455   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792.pem --> /usr/share/ca-certificates/16792.pem (1338 bytes)
	I0806 00:14:54.486139   61720 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 00:14:54.504953   61720 ssh_runner.go:195] Run: openssl version
	I0806 00:14:54.511853   61720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 00:14:54.524862   61720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:14:54.529585   61720 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:14:54.529642   61720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:14:54.535690   61720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 00:14:54.550055   61720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16792.pem && ln -fs /usr/share/ca-certificates/16792.pem /etc/ssl/certs/16792.pem"
	I0806 00:14:54.572183   61720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16792.pem
	I0806 00:14:54.582553   61720 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 23:03 /usr/share/ca-certificates/16792.pem
	I0806 00:14:54.582619   61720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16792.pem
	I0806 00:14:54.590967   61720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16792.pem /etc/ssl/certs/51391683.0"
	I0806 00:14:54.615850   61720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167922.pem && ln -fs /usr/share/ca-certificates/167922.pem /etc/ssl/certs/167922.pem"
	I0806 00:14:54.636139   61720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167922.pem
	I0806 00:14:54.641803   61720 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 23:03 /usr/share/ca-certificates/167922.pem
	I0806 00:14:54.641870   61720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167922.pem
	I0806 00:14:54.648958   61720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167922.pem /etc/ssl/certs/3ec20f2e.0"
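Note: the three blocks above copy each certificate into /usr/share/ca-certificates and then create the OpenSSL subject-hash symlinks (b5213941.0, 51391683.0, 3ec20f2e.0) that TLS libraries use to look up CAs in /etc/ssl/certs. The hash-style names are not arbitrary; they come from the same openssl x509 -hash call the log runs. Roughly, for one certificate (variable names here are illustrative, not from the log):

	# derive the /etc/ssl/certs/<subject-hash>.0 symlink name for a CA certificate
	pem=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$pem")   # prints e.g. b5213941
	sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"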
	I0806 00:14:54.664107   61720 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 00:14:54.669336   61720 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0806 00:14:54.669395   61720 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-907863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-907863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:14:54.669544   61720 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 00:14:54.669609   61720 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 00:14:54.719934   61720 cri.go:89] found id: ""
	I0806 00:14:54.719996   61720 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 00:14:54.732226   61720 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 00:14:54.743958   61720 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 00:14:54.754033   61720 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 00:14:54.754060   61720 kubeadm.go:157] found existing configuration files:
	
	I0806 00:14:54.754116   61720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 00:14:54.763793   61720 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 00:14:54.763871   61720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 00:14:54.774255   61720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 00:14:54.784427   61720 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 00:14:54.784499   61720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 00:14:54.796822   61720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 00:14:54.807691   61720 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 00:14:54.807751   61720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 00:14:54.818222   61720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 00:14:54.830068   61720 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 00:14:54.830140   61720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 00:14:54.841016   61720 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 00:14:55.148580   61720 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 00:16:53.319023   61720 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0806 00:16:53.319206   61720 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0806 00:16:53.320714   61720 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0806 00:16:53.320774   61720 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 00:16:53.320870   61720 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 00:16:53.321037   61720 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 00:16:53.321212   61720 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0806 00:16:53.321316   61720 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 00:16:53.323658   61720 out.go:204]   - Generating certificates and keys ...
	I0806 00:16:53.323752   61720 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 00:16:53.323817   61720 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 00:16:53.323908   61720 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0806 00:16:53.323986   61720 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0806 00:16:53.324086   61720 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0806 00:16:53.324190   61720 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0806 00:16:53.324272   61720 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0806 00:16:53.324462   61720 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-907863 localhost] and IPs [192.168.72.112 127.0.0.1 ::1]
	I0806 00:16:53.324567   61720 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0806 00:16:53.324783   61720 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-907863 localhost] and IPs [192.168.72.112 127.0.0.1 ::1]
	I0806 00:16:53.324861   61720 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0806 00:16:53.325013   61720 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0806 00:16:53.325090   61720 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0806 00:16:53.325182   61720 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 00:16:53.325275   61720 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 00:16:53.325350   61720 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 00:16:53.325424   61720 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 00:16:53.325476   61720 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 00:16:53.325585   61720 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 00:16:53.325698   61720 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 00:16:53.325759   61720 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0806 00:16:53.325839   61720 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 00:16:53.327178   61720 out.go:204]   - Booting up control plane ...
	I0806 00:16:53.327290   61720 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 00:16:53.327397   61720 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 00:16:53.327501   61720 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 00:16:53.327622   61720 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 00:16:53.327765   61720 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0806 00:16:53.327834   61720 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0806 00:16:53.327930   61720 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 00:16:53.328124   61720 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 00:16:53.328221   61720 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 00:16:53.328386   61720 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 00:16:53.328450   61720 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 00:16:53.328594   61720 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 00:16:53.328671   61720 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 00:16:53.328883   61720 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 00:16:53.328968   61720 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 00:16:53.329134   61720 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 00:16:53.329145   61720 kubeadm.go:310] 
	I0806 00:16:53.329199   61720 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0806 00:16:53.329249   61720 kubeadm.go:310] 		timed out waiting for the condition
	I0806 00:16:53.329257   61720 kubeadm.go:310] 
	I0806 00:16:53.329283   61720 kubeadm.go:310] 	This error is likely caused by:
	I0806 00:16:53.329311   61720 kubeadm.go:310] 		- The kubelet is not running
	I0806 00:16:53.329439   61720 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0806 00:16:53.329448   61720 kubeadm.go:310] 
	I0806 00:16:53.329525   61720 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0806 00:16:53.329551   61720 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0806 00:16:53.329577   61720 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0806 00:16:53.329587   61720 kubeadm.go:310] 
	I0806 00:16:53.329673   61720 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0806 00:16:53.329736   61720 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0806 00:16:53.329742   61720 kubeadm.go:310] 
	I0806 00:16:53.329824   61720 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0806 00:16:53.329897   61720 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0806 00:16:53.329978   61720 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0806 00:16:53.330060   61720 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0806 00:16:53.330094   61720 kubeadm.go:310] 
	W0806 00:16:53.330174   61720 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-907863 localhost] and IPs [192.168.72.112 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-907863 localhost] and IPs [192.168.72.112 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0806 00:16:53.330216   61720 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0806 00:16:55.233369   61720 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.903129847s)
	I0806 00:16:55.233446   61720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:16:55.248439   61720 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 00:16:55.258867   61720 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 00:16:55.258896   61720 kubeadm.go:157] found existing configuration files:
	
	I0806 00:16:55.258960   61720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 00:16:55.268950   61720 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 00:16:55.269015   61720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 00:16:55.279498   61720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 00:16:55.289473   61720 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 00:16:55.289541   61720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 00:16:55.300212   61720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 00:16:55.310453   61720 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 00:16:55.310516   61720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 00:16:55.320937   61720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 00:16:55.330356   61720 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 00:16:55.330415   61720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 00:16:55.340361   61720 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 00:16:55.557997   61720 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 00:18:51.535308   61720 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0806 00:18:51.535436   61720 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0806 00:18:51.536898   61720 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0806 00:18:51.536977   61720 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 00:18:51.537072   61720 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 00:18:51.537200   61720 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 00:18:51.537335   61720 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0806 00:18:51.537433   61720 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 00:18:51.539206   61720 out.go:204]   - Generating certificates and keys ...
	I0806 00:18:51.539302   61720 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 00:18:51.539389   61720 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 00:18:51.539511   61720 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0806 00:18:51.539605   61720 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0806 00:18:51.539698   61720 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0806 00:18:51.539769   61720 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0806 00:18:51.539871   61720 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0806 00:18:51.539954   61720 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0806 00:18:51.540024   61720 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0806 00:18:51.540090   61720 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0806 00:18:51.540123   61720 kubeadm.go:310] [certs] Using the existing "sa" key
	I0806 00:18:51.540170   61720 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 00:18:51.540220   61720 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 00:18:51.540268   61720 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 00:18:51.540355   61720 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 00:18:51.540442   61720 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 00:18:51.540594   61720 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 00:18:51.540720   61720 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 00:18:51.540778   61720 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0806 00:18:51.540872   61720 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 00:18:51.542447   61720 out.go:204]   - Booting up control plane ...
	I0806 00:18:51.542535   61720 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 00:18:51.542614   61720 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 00:18:51.542693   61720 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 00:18:51.542792   61720 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 00:18:51.543024   61720 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0806 00:18:51.543111   61720 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0806 00:18:51.543188   61720 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 00:18:51.543345   61720 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 00:18:51.543406   61720 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 00:18:51.543611   61720 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 00:18:51.543688   61720 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 00:18:51.543889   61720 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 00:18:51.543985   61720 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 00:18:51.544254   61720 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 00:18:51.544347   61720 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0806 00:18:51.544608   61720 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0806 00:18:51.544621   61720 kubeadm.go:310] 
	I0806 00:18:51.544684   61720 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0806 00:18:51.544739   61720 kubeadm.go:310] 		timed out waiting for the condition
	I0806 00:18:51.544746   61720 kubeadm.go:310] 
	I0806 00:18:51.544786   61720 kubeadm.go:310] 	This error is likely caused by:
	I0806 00:18:51.544836   61720 kubeadm.go:310] 		- The kubelet is not running
	I0806 00:18:51.544967   61720 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0806 00:18:51.544977   61720 kubeadm.go:310] 
	I0806 00:18:51.545097   61720 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0806 00:18:51.545139   61720 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0806 00:18:51.545186   61720 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0806 00:18:51.545196   61720 kubeadm.go:310] 
	I0806 00:18:51.545333   61720 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0806 00:18:51.545447   61720 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0806 00:18:51.545458   61720 kubeadm.go:310] 
	I0806 00:18:51.545590   61720 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0806 00:18:51.545705   61720 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0806 00:18:51.545818   61720 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0806 00:18:51.545888   61720 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0806 00:18:51.545939   61720 kubeadm.go:310] 
	I0806 00:18:51.545960   61720 kubeadm.go:394] duration metric: took 3m56.876568365s to StartCluster
	I0806 00:18:51.546018   61720 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0806 00:18:51.546076   61720 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0806 00:18:51.589115   61720 cri.go:89] found id: ""
	I0806 00:18:51.589146   61720 logs.go:276] 0 containers: []
	W0806 00:18:51.589157   61720 logs.go:278] No container was found matching "kube-apiserver"
	I0806 00:18:51.589164   61720 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0806 00:18:51.589233   61720 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0806 00:18:51.624648   61720 cri.go:89] found id: ""
	I0806 00:18:51.624689   61720 logs.go:276] 0 containers: []
	W0806 00:18:51.624701   61720 logs.go:278] No container was found matching "etcd"
	I0806 00:18:51.624708   61720 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0806 00:18:51.624771   61720 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0806 00:18:51.669194   61720 cri.go:89] found id: ""
	I0806 00:18:51.669221   61720 logs.go:276] 0 containers: []
	W0806 00:18:51.669229   61720 logs.go:278] No container was found matching "coredns"
	I0806 00:18:51.669235   61720 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0806 00:18:51.669295   61720 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0806 00:18:51.735759   61720 cri.go:89] found id: ""
	I0806 00:18:51.735785   61720 logs.go:276] 0 containers: []
	W0806 00:18:51.735795   61720 logs.go:278] No container was found matching "kube-scheduler"
	I0806 00:18:51.735803   61720 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0806 00:18:51.735869   61720 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0806 00:18:51.780479   61720 cri.go:89] found id: ""
	I0806 00:18:51.780512   61720 logs.go:276] 0 containers: []
	W0806 00:18:51.780524   61720 logs.go:278] No container was found matching "kube-proxy"
	I0806 00:18:51.780533   61720 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0806 00:18:51.780606   61720 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0806 00:18:51.822042   61720 cri.go:89] found id: ""
	I0806 00:18:51.822072   61720 logs.go:276] 0 containers: []
	W0806 00:18:51.822084   61720 logs.go:278] No container was found matching "kube-controller-manager"
	I0806 00:18:51.822091   61720 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0806 00:18:51.822171   61720 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0806 00:18:51.865563   61720 cri.go:89] found id: ""
	I0806 00:18:51.865606   61720 logs.go:276] 0 containers: []
	W0806 00:18:51.865631   61720 logs.go:278] No container was found matching "kindnet"
	I0806 00:18:51.865652   61720 logs.go:123] Gathering logs for kubelet ...
	I0806 00:18:51.865707   61720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0806 00:18:51.924337   61720 logs.go:123] Gathering logs for dmesg ...
	I0806 00:18:51.924371   61720 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0806 00:18:51.941472   61720 logs.go:123] Gathering logs for describe nodes ...
	I0806 00:18:51.941498   61720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0806 00:18:52.071496   61720 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0806 00:18:52.071525   61720 logs.go:123] Gathering logs for CRI-O ...
	I0806 00:18:52.071542   61720 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0806 00:18:52.182131   61720 logs.go:123] Gathering logs for container status ...
	I0806 00:18:52.182171   61720 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0806 00:18:52.228382   61720 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0806 00:18:52.228438   61720 out.go:239] * 
	W0806 00:18:52.228518   61720 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0806 00:18:52.228553   61720 out.go:239] * 
	W0806 00:18:52.229723   61720 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 00:18:52.233292   61720 out.go:177] 
	W0806 00:18:52.234630   61720 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0806 00:18:52.234670   61720 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0806 00:18:52.234688   61720 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0806 00:18:52.236266   61720 out.go:177] 

                                                
                                                
** /stderr **
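The kubeadm output above already names the follow-up checks; collected here as a minimal shell sketch of that triage sequence (the crio socket path and the cgroup-driver suggestion are taken verbatim from the log; sudo access inside the minikube VM, e.g. via 'minikube -p kubernetes-upgrade-907863 ssh', is assumed):

	# Is the kubelet unit up, and why did it stop? (run inside the VM)
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 50
	# List the control-plane containers that cri-o started (or failed to keep running)
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	# If the kubelet cgroup driver is the mismatch, the log's own suggestion is:
	# minikube start -p kubernetes-upgrade-907863 --extra-config=kubelet.cgroup-driver=systemd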
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-907863 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-907863
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-907863: (6.32837669s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-907863 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-907863 status --format={{.Host}}: exit status 7 (66.5689ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
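Exit status 7 from 'minikube status' is what the test tolerates at this step: the profile reports Stopped, which is the expected state right after 'minikube stop' and before the upgrade start. A minimal shell sketch of the same check, reusing the command from the log (treating 7 as "fully stopped" mirrors the test's "(may be ok)" note and is an assumption, not taken from minikube documentation):

	out/minikube-linux-amd64 -p kubernetes-upgrade-907863 status --format={{.Host}}
	rc=$?
	# 0 means Running; the test accepts 7 here because the profile was just stopped
	if [ "$rc" -eq 7 ]; then echo "profile stopped - proceeding with the v1.31.0-rc.0 start"; fi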
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-907863 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-907863 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (41.317634882s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-907863 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-907863 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-907863 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (85.741084ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-907863] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19373
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19373-9606/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19373-9606/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-rc.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-907863
	    minikube start -p kubernetes-upgrade-907863 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9078632 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-907863 --kubernetes-version=v1.31.0-rc.0
	    

                                                
                                                
** /stderr **
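Before picking one of the three recovery options listed in the stderr above, it can help to confirm which version the existing cluster is actually serving; a minimal sketch reusing the kubectl invocation from the log (the jq filter is an illustrative assumption, not part of the test):

	kubectl --context kubernetes-upgrade-907863 version --output=json | jq -r '.serverVersion.gitVersion'
	# expected here: v1.31.0-rc.0, which is why the downgrade to v1.20.0 is refused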
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-907863 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-907863 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (57.613628272s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-08-06 00:20:37.769126211 +0000 UTC m=+5575.181562729
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-907863 -n kubernetes-upgrade-907863
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-907863 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-907863 logs -n 25: (1.614575915s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p running-upgrade-863913                             | running-upgrade-863913    | jenkins | v1.33.1 | 06 Aug 24 00:14 UTC | 06 Aug 24 00:14 UTC |
	| start   | -p kubernetes-upgrade-907863                          | kubernetes-upgrade-907863 | jenkins | v1.33.1 | 06 Aug 24 00:14 UTC |                     |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| start   | -p pause-161508                                       | pause-161508              | jenkins | v1.33.1 | 06 Aug 24 00:14 UTC | 06 Aug 24 00:15 UTC |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| ssh     | cert-options-323157 ssh                               | cert-options-323157       | jenkins | v1.33.1 | 06 Aug 24 00:14 UTC | 06 Aug 24 00:14 UTC |
	|         | openssl x509 -text -noout -in                         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                 |                           |         |         |                     |                     |
	| ssh     | -p cert-options-323157 -- sudo                        | cert-options-323157       | jenkins | v1.33.1 | 06 Aug 24 00:14 UTC | 06 Aug 24 00:14 UTC |
	|         | cat /etc/kubernetes/admin.conf                        |                           |         |         |                     |                     |
	| delete  | -p cert-options-323157                                | cert-options-323157       | jenkins | v1.33.1 | 06 Aug 24 00:14 UTC | 06 Aug 24 00:14 UTC |
	| start   | -p stopped-upgrade-936666                             | minikube                  | jenkins | v1.26.0 | 06 Aug 24 00:14 UTC | 06 Aug 24 00:15 UTC |
	|         | --memory=2200 --vm-driver=kvm2                        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                             |                           |         |         |                     |                     |
	| delete  | -p pause-161508                                       | pause-161508              | jenkins | v1.33.1 | 06 Aug 24 00:15 UTC | 06 Aug 24 00:15 UTC |
	| start   | -p old-k8s-version-038991                             | old-k8s-version-038991    | jenkins | v1.33.1 | 06 Aug 24 00:15 UTC |                     |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --kvm-network=default                                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                               |                           |         |         |                     |                     |
	|         | --keep-context=false                                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                          |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-936666 stop                           | minikube                  | jenkins | v1.26.0 | 06 Aug 24 00:15 UTC | 06 Aug 24 00:15 UTC |
	| start   | -p stopped-upgrade-936666                             | stopped-upgrade-936666    | jenkins | v1.33.1 | 06 Aug 24 00:15 UTC | 06 Aug 24 00:16 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| start   | -p cert-expiration-272169                             | cert-expiration-272169    | jenkins | v1.33.1 | 06 Aug 24 00:15 UTC | 06 Aug 24 00:18 UTC |
	|         | --memory=2048                                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                               |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-936666                             | stopped-upgrade-936666    | jenkins | v1.33.1 | 06 Aug 24 00:16 UTC | 06 Aug 24 00:16 UTC |
	| start   | -p no-preload-917038                                  | no-preload-917038         | jenkins | v1.33.1 | 06 Aug 24 00:16 UTC | 06 Aug 24 00:19 UTC |
	|         | --memory=2200 --alsologtostderr                       |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                           |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                     |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-272169                             | cert-expiration-272169    | jenkins | v1.33.1 | 06 Aug 24 00:18 UTC | 06 Aug 24 00:18 UTC |
	| start   | -p embed-certs-806751                                 | embed-certs-806751        | jenkins | v1.33.1 | 06 Aug 24 00:18 UTC | 06 Aug 24 00:19 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                           |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                          |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-907863                          | kubernetes-upgrade-907863 | jenkins | v1.33.1 | 06 Aug 24 00:18 UTC | 06 Aug 24 00:18 UTC |
	| start   | -p kubernetes-upgrade-907863                          | kubernetes-upgrade-907863 | jenkins | v1.33.1 | 06 Aug 24 00:18 UTC | 06 Aug 24 00:19 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                     |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-907863                          | kubernetes-upgrade-907863 | jenkins | v1.33.1 | 06 Aug 24 00:19 UTC |                     |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                          |                           |         |         |                     |                     |
	|         | --driver=kvm2                                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-907863                          | kubernetes-upgrade-907863 | jenkins | v1.33.1 | 06 Aug 24 00:19 UTC | 06 Aug 24 00:20 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0                     |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio                              |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-806751           | embed-certs-806751        | jenkins | v1.33.1 | 06 Aug 24 00:19 UTC | 06 Aug 24 00:19 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                |                           |         |         |                     |                     |
	| stop    | -p embed-certs-806751                                 | embed-certs-806751        | jenkins | v1.33.1 | 06 Aug 24 00:19 UTC |                     |
	|         | --alsologtostderr -v=3                                |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-917038            | no-preload-917038         | jenkins | v1.33.1 | 06 Aug 24 00:20 UTC | 06 Aug 24 00:20 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                |                           |         |         |                     |                     |
	| stop    | -p no-preload-917038                                  | no-preload-917038         | jenkins | v1.33.1 | 06 Aug 24 00:20 UTC |                     |
	|         | --alsologtostderr -v=3                                |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-038991       | old-k8s-version-038991    | jenkins | v1.33.1 | 06 Aug 24 00:20 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                |                           |         |         |                     |                     |
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 00:19:40
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 00:19:40.195896   65839 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:19:40.196034   65839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:19:40.196045   65839 out.go:304] Setting ErrFile to fd 2...
	I0806 00:19:40.196051   65839 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:19:40.196242   65839 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-9606/.minikube/bin
	I0806 00:19:40.196760   65839 out.go:298] Setting JSON to false
	I0806 00:19:40.198382   65839 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7326,"bootTime":1722896254,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0806 00:19:40.198488   65839 start.go:139] virtualization: kvm guest
	I0806 00:19:40.200842   65839 out.go:177] * [kubernetes-upgrade-907863] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0806 00:19:40.202117   65839 out.go:177]   - MINIKUBE_LOCATION=19373
	I0806 00:19:40.202123   65839 notify.go:220] Checking for updates...
	I0806 00:19:40.203321   65839 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 00:19:40.204563   65839 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19373-9606/kubeconfig
	I0806 00:19:40.206241   65839 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19373-9606/.minikube
	I0806 00:19:40.207639   65839 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0806 00:19:40.209049   65839 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 00:19:40.210901   65839 config.go:182] Loaded profile config "kubernetes-upgrade-907863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0806 00:19:40.211502   65839 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 00:19:40.211561   65839 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 00:19:40.228072   65839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40107
	I0806 00:19:40.228453   65839 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:19:40.229009   65839 main.go:141] libmachine: Using API Version  1
	I0806 00:19:40.229035   65839 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:19:40.229337   65839 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:19:40.229568   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .DriverName
	I0806 00:19:40.229795   65839 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 00:19:40.230234   65839 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 00:19:40.230274   65839 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 00:19:40.245622   65839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42617
	I0806 00:19:40.246065   65839 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:19:40.246602   65839 main.go:141] libmachine: Using API Version  1
	I0806 00:19:40.246627   65839 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:19:40.246963   65839 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:19:40.247218   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .DriverName
	I0806 00:19:40.284627   65839 out.go:177] * Using the kvm2 driver based on existing profile
	I0806 00:19:40.286162   65839 start.go:297] selected driver: kvm2
	I0806 00:19:40.286183   65839 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-907863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:kubernetes-upgrade-907863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.112 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:19:40.286333   65839 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 00:19:40.287371   65839 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:19:40.287465   65839 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19373-9606/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0806 00:19:40.303495   65839 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0806 00:19:40.304064   65839 cni.go:84] Creating CNI manager for ""
	I0806 00:19:40.304085   65839 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 00:19:40.304126   65839 start.go:340] cluster config:
	{Name:kubernetes-upgrade-907863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:kubernetes-upgrade-907863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.112 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:19:40.304237   65839 iso.go:125] acquiring lock: {Name:mk54a637ed625e04bb2b6adf973b61c976cd6d35 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:19:40.306783   65839 out.go:177] * Starting "kubernetes-upgrade-907863" primary control-plane node in "kubernetes-upgrade-907863" cluster
	I0806 00:19:40.307927   65839 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0806 00:19:40.307967   65839 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0806 00:19:40.307974   65839 cache.go:56] Caching tarball of preloaded images
	I0806 00:19:40.308072   65839 preload.go:172] Found /home/jenkins/minikube-integration/19373-9606/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0806 00:19:40.308099   65839 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on crio
	I0806 00:19:40.308204   65839 profile.go:143] Saving config to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/config.json ...
	I0806 00:19:40.308393   65839 start.go:360] acquireMachinesLock for kubernetes-upgrade-907863: {Name:mkd2ba511c39504598222edbf83078b718329186 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:19:40.308438   65839 start.go:364] duration metric: took 27.795µs to acquireMachinesLock for "kubernetes-upgrade-907863"
	I0806 00:19:40.308451   65839 start.go:96] Skipping create...Using existing machine configuration
	I0806 00:19:40.308457   65839 fix.go:54] fixHost starting: 
	I0806 00:19:40.308741   65839 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 00:19:40.308770   65839 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 00:19:40.323858   65839 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37499
	I0806 00:19:40.324402   65839 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:19:40.325002   65839 main.go:141] libmachine: Using API Version  1
	I0806 00:19:40.325029   65839 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:19:40.325323   65839 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:19:40.325581   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .DriverName
	I0806 00:19:40.325728   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetState
	I0806 00:19:40.327760   65839 fix.go:112] recreateIfNeeded on kubernetes-upgrade-907863: state=Running err=<nil>
	W0806 00:19:40.327783   65839 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 00:19:40.329751   65839 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-907863" VM ...
	I0806 00:19:38.098645   63806 pod_ready.go:102] pod "coredns-6f6b679f8f-n78kz" in "kube-system" namespace has status "Ready":"False"
	I0806 00:19:40.103023   63806 pod_ready.go:102] pod "coredns-6f6b679f8f-n78kz" in "kube-system" namespace has status "Ready":"False"
	I0806 00:19:40.331031   65839 machine.go:94] provisionDockerMachine start ...
	I0806 00:19:40.331086   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .DriverName
	I0806 00:19:40.331319   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHHostname
	I0806 00:19:40.334430   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:19:40.335038   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:19:10 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:19:40.335085   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:19:40.335262   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHPort
	I0806 00:19:40.335443   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:19:40.335615   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:19:40.335878   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHUsername
	I0806 00:19:40.336046   65839 main.go:141] libmachine: Using SSH client type: native
	I0806 00:19:40.336282   65839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0806 00:19:40.336294   65839 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 00:19:40.468340   65839 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-907863
	
	I0806 00:19:40.468375   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetMachineName
	I0806 00:19:40.468639   65839 buildroot.go:166] provisioning hostname "kubernetes-upgrade-907863"
	I0806 00:19:40.468668   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetMachineName
	I0806 00:19:40.468966   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHHostname
	I0806 00:19:40.471853   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:19:40.472290   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:19:10 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:19:40.472333   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:19:40.472481   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHPort
	I0806 00:19:40.472655   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:19:40.472857   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:19:40.473007   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHUsername
	I0806 00:19:40.473195   65839 main.go:141] libmachine: Using SSH client type: native
	I0806 00:19:40.473429   65839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0806 00:19:40.473449   65839 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-907863 && echo "kubernetes-upgrade-907863" | sudo tee /etc/hostname
	I0806 00:19:40.600722   65839 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-907863
	
	I0806 00:19:40.600764   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHHostname
	I0806 00:19:40.603626   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:19:40.603991   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:19:10 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:19:40.604026   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:19:40.604191   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHPort
	I0806 00:19:40.604388   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:19:40.604544   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:19:40.604691   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHUsername
	I0806 00:19:40.604852   65839 main.go:141] libmachine: Using SSH client type: native
	I0806 00:19:40.605042   65839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0806 00:19:40.605061   65839 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-907863' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-907863/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-907863' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 00:19:40.711990   65839 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:19:40.712019   65839 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19373-9606/.minikube CaCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19373-9606/.minikube}
	I0806 00:19:40.712078   65839 buildroot.go:174] setting up certificates
	I0806 00:19:40.712091   65839 provision.go:84] configureAuth start
	I0806 00:19:40.712108   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetMachineName
	I0806 00:19:40.712373   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetIP
	I0806 00:19:40.715029   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:19:40.715425   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:19:10 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:19:40.715463   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:19:40.715620   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHHostname
	I0806 00:19:40.718147   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:19:40.718504   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:19:10 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:19:40.718532   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:19:40.718714   65839 provision.go:143] copyHostCerts
	I0806 00:19:40.718794   65839 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem, removing ...
	I0806 00:19:40.718808   65839 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem
	I0806 00:19:40.718895   65839 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem (1082 bytes)
	I0806 00:19:40.719025   65839 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem, removing ...
	I0806 00:19:40.719038   65839 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem
	I0806 00:19:40.719091   65839 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem (1123 bytes)
	I0806 00:19:40.719254   65839 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem, removing ...
	I0806 00:19:40.719267   65839 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem
	I0806 00:19:40.719290   65839 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem (1679 bytes)
	I0806 00:19:40.719359   65839 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-907863 san=[127.0.0.1 192.168.72.112 kubernetes-upgrade-907863 localhost minikube]
	I0806 00:19:40.891860   65839 provision.go:177] copyRemoteCerts
	I0806 00:19:40.891919   65839 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 00:19:40.891942   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHHostname
	I0806 00:19:40.894930   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:19:40.895307   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:19:10 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:19:40.895340   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:19:40.895520   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHPort
	I0806 00:19:40.895731   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:19:40.895875   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHUsername
	I0806 00:19:40.896007   65839 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/kubernetes-upgrade-907863/id_rsa Username:docker}
	I0806 00:19:40.982044   65839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0806 00:19:41.011760   65839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0806 00:19:41.038479   65839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0806 00:19:41.070525   65839 provision.go:87] duration metric: took 358.419054ms to configureAuth
	I0806 00:19:41.070553   65839 buildroot.go:189] setting minikube options for container-runtime
	I0806 00:19:41.070771   65839 config.go:182] Loaded profile config "kubernetes-upgrade-907863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-rc.0
	I0806 00:19:41.070857   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHHostname
	I0806 00:19:41.073779   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:19:41.074161   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:19:10 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:19:41.074216   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:19:41.074390   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHPort
	I0806 00:19:41.074610   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:19:41.074806   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:19:41.074961   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHUsername
	I0806 00:19:41.075156   65839 main.go:141] libmachine: Using SSH client type: native
	I0806 00:19:41.075363   65839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0806 00:19:41.075379   65839 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 00:19:42.599860   63806 pod_ready.go:102] pod "coredns-6f6b679f8f-n78kz" in "kube-system" namespace has status "Ready":"False"
	I0806 00:19:45.098037   63806 pod_ready.go:102] pod "coredns-6f6b679f8f-n78kz" in "kube-system" namespace has status "Ready":"False"
	I0806 00:19:47.098069   63806 pod_ready.go:102] pod "coredns-6f6b679f8f-n78kz" in "kube-system" namespace has status "Ready":"False"
	I0806 00:19:49.099093   63806 pod_ready.go:102] pod "coredns-6f6b679f8f-n78kz" in "kube-system" namespace has status "Ready":"False"
	I0806 00:19:51.598515   63806 pod_ready.go:92] pod "coredns-6f6b679f8f-n78kz" in "kube-system" namespace has status "Ready":"True"
	I0806 00:19:51.598540   63806 pod_ready.go:81] duration metric: took 39.506987625s for pod "coredns-6f6b679f8f-n78kz" in "kube-system" namespace to be "Ready" ...
	I0806 00:19:51.598552   63806 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-917038" in "kube-system" namespace to be "Ready" ...
	I0806 00:19:51.603636   63806 pod_ready.go:92] pod "etcd-no-preload-917038" in "kube-system" namespace has status "Ready":"True"
	I0806 00:19:51.603657   63806 pod_ready.go:81] duration metric: took 5.098258ms for pod "etcd-no-preload-917038" in "kube-system" namespace to be "Ready" ...
	I0806 00:19:51.603669   63806 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-917038" in "kube-system" namespace to be "Ready" ...
	I0806 00:19:51.608945   63806 pod_ready.go:92] pod "kube-apiserver-no-preload-917038" in "kube-system" namespace has status "Ready":"True"
	I0806 00:19:51.608966   63806 pod_ready.go:81] duration metric: took 5.287456ms for pod "kube-apiserver-no-preload-917038" in "kube-system" namespace to be "Ready" ...
	I0806 00:19:51.608980   63806 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-917038" in "kube-system" namespace to be "Ready" ...
	I0806 00:19:51.613002   63806 pod_ready.go:92] pod "kube-controller-manager-no-preload-917038" in "kube-system" namespace has status "Ready":"True"
	I0806 00:19:51.613021   63806 pod_ready.go:81] duration metric: took 4.033297ms for pod "kube-controller-manager-no-preload-917038" in "kube-system" namespace to be "Ready" ...
	I0806 00:19:51.613032   63806 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9pknx" in "kube-system" namespace to be "Ready" ...
	I0806 00:19:51.617900   63806 pod_ready.go:92] pod "kube-proxy-9pknx" in "kube-system" namespace has status "Ready":"True"
	I0806 00:19:51.617918   63806 pod_ready.go:81] duration metric: took 4.878567ms for pod "kube-proxy-9pknx" in "kube-system" namespace to be "Ready" ...
	I0806 00:19:51.617929   63806 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-917038" in "kube-system" namespace to be "Ready" ...
	I0806 00:19:51.996145   63806 pod_ready.go:92] pod "kube-scheduler-no-preload-917038" in "kube-system" namespace has status "Ready":"True"
	I0806 00:19:51.996175   63806 pod_ready.go:81] duration metric: took 378.238202ms for pod "kube-scheduler-no-preload-917038" in "kube-system" namespace to be "Ready" ...
	I0806 00:19:51.996186   63806 pod_ready.go:38] duration metric: took 39.943210389s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
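The pod_ready.go lines above poll each system-critical pod until its status reports the Ready condition as True, recording how long each wait took. Below is a minimal client-go sketch of the same idea; it is not minikube's implementation, and the kubeconfig path, pod name and timeout are illustrative values taken from this log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForPodReady polls the API server until the named pod reports Ready
// or the timeout expires.
func waitForPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}

func main() {
	// Kubeconfig path, pod name and timeout are illustrative.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForPodReady(cs, "kube-system", "coredns-6f6b679f8f-n78kz", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}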
	I0806 00:19:51.996207   63806 api_server.go:52] waiting for apiserver process to appear ...
	I0806 00:19:51.996273   63806 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:19:52.012593   63806 api_server.go:72] duration metric: took 40.781994281s to wait for apiserver process to appear ...
	I0806 00:19:52.012617   63806 api_server.go:88] waiting for apiserver healthz status ...
	I0806 00:19:52.012640   63806 api_server.go:253] Checking apiserver healthz at https://192.168.61.12:8443/healthz ...
	I0806 00:19:52.017191   63806 api_server.go:279] https://192.168.61.12:8443/healthz returned 200:
	ok
	I0806 00:19:52.018320   63806 api_server.go:141] control plane version: v1.31.0-rc.0
	I0806 00:19:52.018343   63806 api_server.go:131] duration metric: took 5.719107ms to wait for apiserver health ...
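The healthz wait above is a plain HTTPS GET against https://192.168.61.12:8443/healthz that succeeds once the response body reads "ok". A minimal Go sketch of such a probe follows; it is not minikube's api_server.go code, and certificate verification is skipped only because a bootstrapping apiserver presents a self-signed chain (a real client would pin the cluster CA instead).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz performs a single GET against an apiserver /healthz endpoint
// and succeeds only on a 200 response whose body is exactly "ok".
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only: skip verification for the self-signed bootstrap cert.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("healthz returned %d: %q", resp.StatusCode, body)
	}
	return nil
}

func main() {
	// Endpoint taken from the log above.
	if err := checkHealthz("https://192.168.61.12:8443/healthz"); err != nil {
		fmt.Println("apiserver not healthy yet:", err)
		return
	}
	fmt.Println("ok")
}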
	I0806 00:19:52.018350   63806 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 00:19:52.203158   63806 system_pods.go:59] 8 kube-system pods found
	I0806 00:19:52.203198   63806 system_pods.go:61] "coredns-6f6b679f8f-n78kz" [7b87a265-bc8a-4286-93b8-be63e36f5dcb] Running
	I0806 00:19:52.203205   63806 system_pods.go:61] "coredns-6f6b679f8f-s29hx" [f4070aed-11e9-47ea-a315-419b2ad54f74] Running
	I0806 00:19:52.203209   63806 system_pods.go:61] "etcd-no-preload-917038" [dca1f4a7-1645-4d3f-9ff5-9222caf3cea9] Running
	I0806 00:19:52.203215   63806 system_pods.go:61] "kube-apiserver-no-preload-917038" [676c9940-739a-4a9c-8f10-ed07fb58e83b] Running
	I0806 00:19:52.203220   63806 system_pods.go:61] "kube-controller-manager-no-preload-917038" [beddecab-f334-4ee7-b956-d503add2b24b] Running
	I0806 00:19:52.203224   63806 system_pods.go:61] "kube-proxy-9pknx" [e6285e62-0d18-4273-8f9e-95f994db6bf3] Running
	I0806 00:19:52.203229   63806 system_pods.go:61] "kube-scheduler-no-preload-917038" [552c0db6-fc57-4476-9a8f-46e055ac7246] Running
	I0806 00:19:52.203233   63806 system_pods.go:61] "storage-provisioner" [6d6c24f1-cbe6-46cc-be3a-9de7c792f5b9] Running
	I0806 00:19:52.203240   63806 system_pods.go:74] duration metric: took 184.883171ms to wait for pod list to return data ...
	I0806 00:19:52.203249   63806 default_sa.go:34] waiting for default service account to be created ...
	I0806 00:19:52.395944   63806 default_sa.go:45] found service account: "default"
	I0806 00:19:52.395971   63806 default_sa.go:55] duration metric: took 192.715737ms for default service account to be created ...
	I0806 00:19:52.395980   63806 system_pods.go:116] waiting for k8s-apps to be running ...
	I0806 00:19:52.599275   63806 system_pods.go:86] 8 kube-system pods found
	I0806 00:19:52.599303   63806 system_pods.go:89] "coredns-6f6b679f8f-n78kz" [7b87a265-bc8a-4286-93b8-be63e36f5dcb] Running
	I0806 00:19:52.599309   63806 system_pods.go:89] "coredns-6f6b679f8f-s29hx" [f4070aed-11e9-47ea-a315-419b2ad54f74] Running
	I0806 00:19:52.599315   63806 system_pods.go:89] "etcd-no-preload-917038" [dca1f4a7-1645-4d3f-9ff5-9222caf3cea9] Running
	I0806 00:19:52.599320   63806 system_pods.go:89] "kube-apiserver-no-preload-917038" [676c9940-739a-4a9c-8f10-ed07fb58e83b] Running
	I0806 00:19:52.599324   63806 system_pods.go:89] "kube-controller-manager-no-preload-917038" [beddecab-f334-4ee7-b956-d503add2b24b] Running
	I0806 00:19:52.599328   63806 system_pods.go:89] "kube-proxy-9pknx" [e6285e62-0d18-4273-8f9e-95f994db6bf3] Running
	I0806 00:19:52.599334   63806 system_pods.go:89] "kube-scheduler-no-preload-917038" [552c0db6-fc57-4476-9a8f-46e055ac7246] Running
	I0806 00:19:52.599338   63806 system_pods.go:89] "storage-provisioner" [6d6c24f1-cbe6-46cc-be3a-9de7c792f5b9] Running
	I0806 00:19:52.599347   63806 system_pods.go:126] duration metric: took 203.361167ms to wait for k8s-apps to be running ...
	I0806 00:19:52.599356   63806 system_svc.go:44] waiting for kubelet service to be running ....
	I0806 00:19:52.599403   63806 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:19:52.618856   63806 system_svc.go:56] duration metric: took 19.489411ms WaitForService to wait for kubelet
	I0806 00:19:52.618892   63806 kubeadm.go:582] duration metric: took 41.3882955s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 00:19:52.618918   63806 node_conditions.go:102] verifying NodePressure condition ...
	I0806 00:19:52.796119   63806 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 00:19:52.796154   63806 node_conditions.go:123] node cpu capacity is 2
	I0806 00:19:52.796170   63806 node_conditions.go:105] duration metric: took 177.245923ms to run NodePressure ...
	I0806 00:19:52.796185   63806 start.go:241] waiting for startup goroutines ...
	I0806 00:19:52.796196   63806 start.go:246] waiting for cluster config update ...
	I0806 00:19:52.796209   63806 start.go:255] writing updated cluster config ...
	I0806 00:19:52.796557   63806 ssh_runner.go:195] Run: rm -f paused
	I0806 00:19:52.856562   63806 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-rc.0 (minor skew: 1)
	I0806 00:19:52.857990   63806 out.go:177] * Done! kubectl is now configured to use "no-preload-917038" cluster and "default" namespace by default
	I0806 00:19:50.881088   65839 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 00:19:50.881116   65839 machine.go:97] duration metric: took 10.550071594s to provisionDockerMachine
	I0806 00:19:50.881127   65839 start.go:293] postStartSetup for "kubernetes-upgrade-907863" (driver="kvm2")
	I0806 00:19:50.881138   65839 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 00:19:50.881152   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .DriverName
	I0806 00:19:50.881540   65839 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 00:19:50.881582   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHHostname
	I0806 00:19:50.884450   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:19:50.884842   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:19:10 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:19:50.884874   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:19:50.884987   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHPort
	I0806 00:19:50.885201   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:19:50.885376   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHUsername
	I0806 00:19:50.885515   65839 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/kubernetes-upgrade-907863/id_rsa Username:docker}
	I0806 00:19:50.969410   65839 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 00:19:50.974075   65839 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 00:19:50.974108   65839 filesync.go:126] Scanning /home/jenkins/minikube-integration/19373-9606/.minikube/addons for local assets ...
	I0806 00:19:50.974177   65839 filesync.go:126] Scanning /home/jenkins/minikube-integration/19373-9606/.minikube/files for local assets ...
	I0806 00:19:50.974269   65839 filesync.go:149] local asset: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem -> 167922.pem in /etc/ssl/certs
	I0806 00:19:50.974384   65839 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 00:19:50.984615   65839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem --> /etc/ssl/certs/167922.pem (1708 bytes)
	I0806 00:19:51.011522   65839 start.go:296] duration metric: took 130.382023ms for postStartSetup
	I0806 00:19:51.011566   65839 fix.go:56] duration metric: took 10.703107859s for fixHost
	I0806 00:19:51.011589   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHHostname
	I0806 00:19:51.014371   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:19:51.014712   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:19:10 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:19:51.014766   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:19:51.014903   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHPort
	I0806 00:19:51.015136   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:19:51.015319   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:19:51.015483   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHUsername
	I0806 00:19:51.015661   65839 main.go:141] libmachine: Using SSH client type: native
	I0806 00:19:51.015809   65839 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0806 00:19:51.015819   65839 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0806 00:19:51.116052   65839 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722903591.113313483
	
	I0806 00:19:51.116074   65839 fix.go:216] guest clock: 1722903591.113313483
	I0806 00:19:51.116093   65839 fix.go:229] Guest: 2024-08-06 00:19:51.113313483 +0000 UTC Remote: 2024-08-06 00:19:51.011570985 +0000 UTC m=+10.852940548 (delta=101.742498ms)
	I0806 00:19:51.116120   65839 fix.go:200] guest clock delta is within tolerance: 101.742498ms
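fix.go above compares the guest clock (read over SSH with date +%s.%N) against the local wall clock and accepts the host when the delta is small; here the delta is 101.742498ms. The sketch below reuses the two timestamps from the log to show that comparison; the 2s tolerance is an assumed illustration, not minikube's actual threshold.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseEpoch parses `date +%s.%N` output such as "1722903591.113313483".
// %N is zero-padded to nine digits, so the fractional part maps directly
// to nanoseconds.
func parseEpoch(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// Both values are taken from the log lines above.
	guest, err := parseEpoch("1722903591.113313483")
	if err != nil {
		panic(err)
	}
	remote := time.Date(2024, 8, 6, 0, 19, 51, 11570985, time.UTC)
	delta := guest.Sub(remote)
	const tolerance = 2 * time.Second // illustrative threshold
	fmt.Printf("delta=%v, within tolerance: %v\n", delta, delta.Abs() < tolerance)
}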
	I0806 00:19:51.116134   65839 start.go:83] releasing machines lock for "kubernetes-upgrade-907863", held for 10.807687409s
	I0806 00:19:51.116165   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .DriverName
	I0806 00:19:51.116434   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetIP
	I0806 00:19:51.119073   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:19:51.119496   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:19:10 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:19:51.119518   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:19:51.119626   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .DriverName
	I0806 00:19:51.120166   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .DriverName
	I0806 00:19:51.120335   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .DriverName
	I0806 00:19:51.120427   65839 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 00:19:51.120462   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHHostname
	I0806 00:19:51.120586   65839 ssh_runner.go:195] Run: cat /version.json
	I0806 00:19:51.120612   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHHostname
	I0806 00:19:51.123251   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:19:51.123486   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:19:51.123632   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:19:10 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:19:51.123658   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:19:51.123837   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:19:10 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:19:51.123881   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:19:51.123886   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHPort
	I0806 00:19:51.123986   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHPort
	I0806 00:19:51.124083   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:19:51.124171   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:19:51.124233   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHUsername
	I0806 00:19:51.124303   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHUsername
	I0806 00:19:51.124372   65839 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/kubernetes-upgrade-907863/id_rsa Username:docker}
	I0806 00:19:51.124407   65839 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/kubernetes-upgrade-907863/id_rsa Username:docker}
	I0806 00:19:51.200424   65839 ssh_runner.go:195] Run: systemctl --version
	I0806 00:19:51.223298   65839 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 00:19:51.376749   65839 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 00:19:51.382999   65839 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 00:19:51.383077   65839 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 00:19:51.393030   65839 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0806 00:19:51.393056   65839 start.go:495] detecting cgroup driver to use...
	I0806 00:19:51.393108   65839 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 00:19:51.411422   65839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:19:51.426851   65839 docker.go:217] disabling cri-docker service (if available) ...
	I0806 00:19:51.426908   65839 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 00:19:51.444162   65839 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 00:19:51.457979   65839 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 00:19:51.594974   65839 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 00:19:51.738958   65839 docker.go:233] disabling docker service ...
	I0806 00:19:51.739035   65839 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 00:19:51.755903   65839 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 00:19:51.769884   65839 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 00:19:51.914851   65839 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 00:19:52.074947   65839 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 00:19:52.092185   65839 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:19:52.172493   65839 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0806 00:19:52.467157   65839 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0806 00:19:52.467234   65839 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 00:19:52.508788   65839 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 00:19:52.508868   65839 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 00:19:52.578532   65839 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 00:19:52.649508   65839 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 00:19:52.727466   65839 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 00:19:52.865150   65839 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 00:19:53.023702   65839 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 00:19:53.182850   65839 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 00:19:53.266079   65839 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 00:19:53.310503   65839 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 00:19:53.330237   65839 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:19:53.577652   65839 ssh_runner.go:195] Run: sudo systemctl restart crio
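Each of the sed invocations above edits /etc/crio/crio.conf.d/02-crio.conf in place before CRI-O is restarted. The Go sketch below mirrors only the first edit, rewriting the pause_image line; it is a simplified stand-in for the sed command, assuming the drop-in file exists at the path shown in the log.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setPauseImage replaces any existing pause_image line in a CRI-O drop-in
// config with the requested image, matching the sed invocation above.
func setPauseImage(path, image string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf(`pause_image = %q`, image)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	// Path and image follow the values shown in the log; run on the guest as root.
	if err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}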
	I0806 00:19:54.242708   65839 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 00:19:54.242827   65839 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 00:19:54.248378   65839 start.go:563] Will wait 60s for crictl version
	I0806 00:19:54.248477   65839 ssh_runner.go:195] Run: which crictl
	I0806 00:19:54.253532   65839 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 00:19:54.320841   65839 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 00:19:54.320946   65839 ssh_runner.go:195] Run: crio --version
	I0806 00:19:54.368096   65839 ssh_runner.go:195] Run: crio --version
	I0806 00:19:54.581553   65839 out.go:177] * Preparing Kubernetes v1.31.0-rc.0 on CRI-O 1.29.1 ...
	I0806 00:19:54.582836   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetIP
	I0806 00:19:54.585754   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:19:54.586120   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:19:10 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:19:54.586151   65839 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:19:54.586357   65839 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0806 00:19:54.620599   65839 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-907863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.0-rc.0 ClusterName:kubernetes-upgrade-907863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.112 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 00:19:54.620828   65839 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0806 00:19:54.901253   65839 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0806 00:19:55.190930   65839 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0806 00:19:55.894678   65839 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0806 00:19:55.894849   65839 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0806 00:19:56.172259   65839 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0806 00:19:56.445172   65839 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubeadm.sha256
	I0806 00:19:56.723386   65839 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 00:19:56.764778   65839 crio.go:514] all images are preloaded for cri-o runtime.
	I0806 00:19:56.764800   65839 crio.go:433] Images already preloaded, skipping extraction
	I0806 00:19:56.764852   65839 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 00:19:56.797365   65839 crio.go:514] all images are preloaded for cri-o runtime.
	I0806 00:19:56.797389   65839 cache_images.go:84] Images are preloaded, skipping loading
	I0806 00:19:56.797399   65839 kubeadm.go:934] updating node { 192.168.72.112 8443 v1.31.0-rc.0 crio true true} ...
	I0806 00:19:56.797530   65839 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-907863 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.112
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-rc.0 ClusterName:kubernetes-upgrade-907863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
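kubeadm.go:946 above renders the kubelet systemd drop-in from the node IP, hostname override and versioned kubelet path. The sketch below renders a comparable unit with text/template; the template text and field names are illustrative, not minikube's actual template.

package main

import (
	"os"
	"text/template"
)

// unit is a simplified stand-in for the kubelet drop-in shown above.
const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

type params struct {
	KubeletPath string
	NodeName    string
	NodeIP      string
}

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	// Values taken from the log lines above.
	_ = t.Execute(os.Stdout, params{
		KubeletPath: "/var/lib/minikube/binaries/v1.31.0-rc.0/kubelet",
		NodeName:    "kubernetes-upgrade-907863",
		NodeIP:      "192.168.72.112",
	})
}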
	I0806 00:19:56.797620   65839 ssh_runner.go:195] Run: crio config
	I0806 00:19:56.844399   65839 cni.go:84] Creating CNI manager for ""
	I0806 00:19:56.844435   65839 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 00:19:56.844452   65839 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 00:19:56.844479   65839 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.112 APIServerPort:8443 KubernetesVersion:v1.31.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-907863 NodeName:kubernetes-upgrade-907863 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.112"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.112 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cert
s/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 00:19:56.844687   65839 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.112
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-907863"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.112
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.112"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
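The generated kubeadm config above carries four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. Below is a sketch, using gopkg.in/yaml.v3, for sanity-checking that each document in such a file parses before it is handed to kubeadm; the check is an editorial illustration rather than anything minikube runs, and the path matches the kubeadm.yaml.new written a few lines below.

package main

import (
	"bytes"
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// main reads a multi-document kubeadm config and prints the apiVersion/kind
// of every document, failing if any document does not parse.
func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	dec := yaml.NewDecoder(bytes.NewReader(data))
	for i := 0; ; i++ {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			fmt.Fprintf(os.Stderr, "document %d: %v\n", i, err)
			os.Exit(1)
		}
		fmt.Printf("document %d: %v/%v\n", i, doc["apiVersion"], doc["kind"])
	}
}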
	I0806 00:19:56.844768   65839 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-rc.0
	I0806 00:19:56.856026   65839 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 00:19:56.856088   65839 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 00:19:56.865919   65839 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (330 bytes)
	I0806 00:19:56.882934   65839 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0806 00:19:56.900718   65839 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0806 00:19:56.917468   65839 ssh_runner.go:195] Run: grep 192.168.72.112	control-plane.minikube.internal$ /etc/hosts
	I0806 00:19:56.921455   65839 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:19:57.070881   65839 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 00:19:57.086147   65839 certs.go:68] Setting up /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863 for IP: 192.168.72.112
	I0806 00:19:57.086170   65839 certs.go:194] generating shared ca certs ...
	I0806 00:19:57.086190   65839 certs.go:226] acquiring lock for ca certs: {Name:mkf35a042c1656d191f542eee7fa087aad4d29d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:19:57.086366   65839 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key
	I0806 00:19:57.086421   65839 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key
	I0806 00:19:57.086435   65839 certs.go:256] generating profile certs ...
	I0806 00:19:57.086538   65839 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/client.key
	I0806 00:19:57.086604   65839 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/apiserver.key.777d71ca
	I0806 00:19:57.086669   65839 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/proxy-client.key
	I0806 00:19:57.086827   65839 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792.pem (1338 bytes)
	W0806 00:19:57.086879   65839 certs.go:480] ignoring /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792_empty.pem, impossibly tiny 0 bytes
	I0806 00:19:57.086893   65839 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem (1679 bytes)
	I0806 00:19:57.086929   65839 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem (1082 bytes)
	I0806 00:19:57.086967   65839 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem (1123 bytes)
	I0806 00:19:57.086997   65839 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem (1679 bytes)
	I0806 00:19:57.087069   65839 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem (1708 bytes)
	I0806 00:19:57.087698   65839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 00:19:57.114543   65839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 00:19:57.138600   65839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 00:19:57.163583   65839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0806 00:19:57.188338   65839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0806 00:19:57.213265   65839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0806 00:19:57.238456   65839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 00:19:57.262680   65839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 00:19:57.286154   65839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem --> /usr/share/ca-certificates/167922.pem (1708 bytes)
	I0806 00:19:57.312411   65839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 00:19:57.339930   65839 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792.pem --> /usr/share/ca-certificates/16792.pem (1338 bytes)
	I0806 00:19:57.365423   65839 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 00:19:57.382422   65839 ssh_runner.go:195] Run: openssl version
	I0806 00:19:57.388580   65839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167922.pem && ln -fs /usr/share/ca-certificates/167922.pem /etc/ssl/certs/167922.pem"
	I0806 00:19:57.401255   65839 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167922.pem
	I0806 00:19:57.405860   65839 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 23:03 /usr/share/ca-certificates/167922.pem
	I0806 00:19:57.405928   65839 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167922.pem
	I0806 00:19:57.411878   65839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167922.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 00:19:57.421398   65839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 00:19:57.432599   65839 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:19:57.437125   65839 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:19:57.437203   65839 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:19:57.442856   65839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 00:19:57.452471   65839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16792.pem && ln -fs /usr/share/ca-certificates/16792.pem /etc/ssl/certs/16792.pem"
	I0806 00:19:57.463110   65839 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16792.pem
	I0806 00:19:57.467684   65839 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 23:03 /usr/share/ca-certificates/16792.pem
	I0806 00:19:57.467732   65839 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16792.pem
	I0806 00:19:57.473161   65839 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16792.pem /etc/ssl/certs/51391683.0"
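The three blocks above hash each CA bundle with openssl x509 -hash and link it into /etc/ssl/certs as <hash>.0 (3ec20f2e.0, b5213941.0, 51391683.0), which is how OpenSSL locates trusted certificates. The sketch below shells out to the same openssl command to reproduce one such link; the paths are illustrative and it must run as root on the guest.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert asks openssl for the certificate's subject hash and creates the
// <hash>.0 symlink that the system trust store expects.
func linkCert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // symlink (or file) already present
	}
	return os.Symlink(certPath, link)
}

func main() {
	// Paths follow the log above.
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}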
	I0806 00:19:57.489351   65839 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 00:19:57.494041   65839 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0806 00:19:57.500301   65839 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0806 00:19:57.506985   65839 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0806 00:19:57.512954   65839 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0806 00:19:57.518862   65839 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0806 00:19:57.524814   65839 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0806 00:19:57.530531   65839 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-907863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.0-rc.0 ClusterName:kubernetes-upgrade-907863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.112 Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:19:57.530640   65839 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 00:19:57.530697   65839 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 00:19:57.568448   65839 cri.go:89] found id: "b72493e493bbbfb09237539118d80154b3a2cae34285d46239d72cdbf3e6be78"
	I0806 00:19:57.568477   65839 cri.go:89] found id: "4dcd434757810bfc1a43b48b540bad35bb2daa0a129cc91f3c8f8f5c8a840d3f"
	I0806 00:19:57.568483   65839 cri.go:89] found id: "53ef65d4855fce877cf9f2e415966fb26c60456cf78eb1381b0fa7cc3e3b65eb"
	I0806 00:19:57.568488   65839 cri.go:89] found id: "4720cc8c7f5da78dbd241c36befde0d46687ef4735777fb51af652b93ea8d290"
	I0806 00:19:57.568492   65839 cri.go:89] found id: "ad00527cd5c8e92719898aac9febcbb9738228aac487e30f43c3138b056d8adc"
	I0806 00:19:57.568497   65839 cri.go:89] found id: "14d65fe8451ef85c31bae4559ca7aa7a9f6bea8038589e437995ada823c1c56b"
	I0806 00:19:57.568501   65839 cri.go:89] found id: "24ea605f46634adcc3c291a53df8b75d127f7dfe49e12976e580e9dc3d009811"
	I0806 00:19:57.568506   65839 cri.go:89] found id: "d36a3ffb78adc077bb974e9a30ffbceb8b729e2283053891df17808e4e624cce"
	I0806 00:19:57.568510   65839 cri.go:89] found id: "9d234ac38f8d47275740de26362119977185d7cb7eac909ff117e055728ff8fa"
	I0806 00:19:57.568518   65839 cri.go:89] found id: "39cfd7ab5a3fd77b45c7dc078e34e05f8bdc804bcd399b646de0fd771033cd49"
	I0806 00:19:57.568522   65839 cri.go:89] found id: ""
	I0806 00:19:57.568571   65839 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 06 00:20:38 kubernetes-upgrade-907863 crio[3015]: time="2024-08-06 00:20:38.560128788Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=72ce27f3-38fb-4573-878e-9267ebca9344 name=/runtime.v1.RuntimeService/Version
	Aug 06 00:20:38 kubernetes-upgrade-907863 crio[3015]: time="2024-08-06 00:20:38.561225505Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7e763ec8-e18c-4f78-9a21-7fef60423a4e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 00:20:38 kubernetes-upgrade-907863 crio[3015]: time="2024-08-06 00:20:38.561594981Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722903638561574649,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7e763ec8-e18c-4f78-9a21-7fef60423a4e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 00:20:38 kubernetes-upgrade-907863 crio[3015]: time="2024-08-06 00:20:38.562273898Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=28073b82-c0fa-4c6f-8ab7-332fde867217 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 00:20:38 kubernetes-upgrade-907863 crio[3015]: time="2024-08-06 00:20:38.562345119Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=28073b82-c0fa-4c6f-8ab7-332fde867217 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 00:20:38 kubernetes-upgrade-907863 crio[3015]: time="2024-08-06 00:20:38.563020767Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a5a6c0c15144e35b2d82d6e5c5fafb7af65b03c1c7a8960b8266c30292fb67d5,PodSandboxId:8a50c5cbfcdc796b19bf34955a0e084944f50b3b928d3bdd491424d3d8a839dc,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722903635634869166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rsxhg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e06c667c-e087-412d-9f29-5327c44624f5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40f2a28f017dc7040c9ddb57165a729b9b1896b58f158b9c47cdbe1c17a2e022,PodSandboxId:ebbc805d4572a510e7e7dfe0c455136cbed20e9d3c451664ce1d1e906bfe82ba,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722903635651347406,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k6hlg,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: d6234dc3-6125-4dff-b0cf-9b3c91bde525,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01dac4acf9a406b9148068a812b2f09cf3cab420a3e597485b2562f991080357,PodSandboxId:2c613a5bc6f66889c328c0a7331638eb2343bf8cfe6de7f5155f47781517a15b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAIN
ER_RUNNING,CreatedAt:1722903635599654060,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lq9cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8adf1bf-ff2f-4a63-a674-f1e4f0ac4c3a,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0349f4112fbfbdaa8af5c3e28976475ade87ee8c8c60064b48aa7ea8fa7cfac6,PodSandboxId:c9b6fb3dd0d2f87daab3cc7615517a2b58473f7b15befa96a69133299255d581,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:172
2903635603166012,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d37c81c-0a57-4492-8d0a-67be715cc6c1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8253810e0ea0f48b972e287db8b4390562655d245869d92f9d54bd4da04aab33,PodSandboxId:045678e3415569be12121b425eed83e7bd2abadbc779a6e20b0e73607e69bf98,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1722903632540571139,
Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-907863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 424801322ef46d2df89d296d920cffb0,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30ec18b502cd8ef8aefd56c1e7176c6fd1428d5cd509c7bf0016bef32bcf8c9a,PodSandboxId:0a612b51ff5b56c26cd754bb17e5b7df5903ff08c7d851d08228d5ed2049ef60,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1722903631908744
818,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-907863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9aa4f0f91684b2d80cd8d5dd1207e3,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6616621af7ac0abed5d194828d9147309ad65adb109ee99d83c92199904af697,PodSandboxId:15d0e73e0a265c5d5975d2288e48be6571766e7ac5de45ba59133278071da123,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1722903631884
923836,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-907863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a22fb3871d12d07740f5fc675c6fcc33,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9415b012078eb3f1e33a013354a90fb6f48039e62f34eb2f2af0cf06486caa1,PodSandboxId:69a5f6b5a5643edce8335b9029fd231360526476d5e985a4773beb70ee84b696,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1722903631896359359,Labels:map[string]
string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-907863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ccfe832c2b126df6b6fe0c737f5b6c9,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61a0d3fa8339169fc4d3f48e407eaae53ee7a4537d54b1ad6d80635e4e5ced7b,PodSandboxId:045678e3415569be12121b425eed83e7bd2abadbc779a6e20b0e73607e69bf98,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722903610005912434,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-907863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 424801322ef46d2df89d296d920cffb0,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd68bc943020cfc45efe4c7bf896c3fbfdeaf49947a1851ec5050b57f5349b33,PodSandboxId:c9b6fb3dd0d2f87daab3cc7615517a2b58473f7b15befa96a69133299255d581,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722903609001275921,Labels:map[string]string{
io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d37c81c-0a57-4492-8d0a-67be715cc6c1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b72493e493bbbfb09237539118d80154b3a2cae34285d46239d72cdbf3e6be78,PodSandboxId:8a50c5cbfcdc796b19bf34955a0e084944f50b3b928d3bdd491424d3d8a839dc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722903595080701979,Labels:map[string]string{io.kubernetes.container.n
ame: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rsxhg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e06c667c-e087-412d-9f29-5327c44624f5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcd434757810bfc1a43b48b540bad35bb2daa0a129cc91f3c8f8f5c8a840d3f,PodSandboxId:ebbc805d4572a510e7e7dfe0c455136cbed20e9d3c451664ce1d1e906bfe82ba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722903595070282285,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k6hlg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6234dc3-6125-4dff-b0cf-9b3c91bde525,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53ef65d4855fce877cf9f2e415966fb26c60456cf78eb1381b0fa7cc3e3b65eb,PodSandboxId:c99e147fd1a54c2213e916aa1e9a2628820f47c33d84c4057f2911395832692c,
Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_EXITED,CreatedAt:1722903592945053596,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lq9cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8adf1bf-ff2f-4a63-a674-f1e4f0ac4c3a,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad00527cd5c8e92719898aac9febcbb9738228aac487e30f43c3138b056d8adc,PodSandboxId:6d78ee3154662ac093cbd1219273a45de8da67be02f643f1ab86659ad17c838f,Metadata:&ContainerMetadata{Name:k
ube-scheduler,Attempt:1,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_EXITED,CreatedAt:1722903592768320241,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-907863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ccfe832c2b126df6b6fe0c737f5b6c9,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4720cc8c7f5da78dbd241c36befde0d46687ef4735777fb51af652b93ea8d290,PodSandboxId:923a57d21a2816a8f9a35a19b52116c06af7ecf135733fb644ec6f5ad71f1b3c,Metadata:&ContainerMetadata{Name:kube-co
ntroller-manager,Attempt:1,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_EXITED,CreatedAt:1722903592771750724,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-907863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9aa4f0f91684b2d80cd8d5dd1207e3,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14d65fe8451ef85c31bae4559ca7aa7a9f6bea8038589e437995ada823c1c56b,PodSandboxId:fec2a68913adbcef053bc123a54cd6126f3fec1aaff8007c642f4cd83b66adc1,Metadata:&Container
Metadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1722903592666548077,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-907863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a22fb3871d12d07740f5fc675c6fcc33,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=28073b82-c0fa-4c6f-8ab7-332fde867217 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 00:20:38 kubernetes-upgrade-907863 crio[3015]: time="2024-08-06 00:20:38.603038297Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=6b116d87-ab5a-4647-9440-755d29cb1735 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 06 00:20:38 kubernetes-upgrade-907863 crio[3015]: time="2024-08-06 00:20:38.603442436Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:ebbc805d4572a510e7e7dfe0c455136cbed20e9d3c451664ce1d1e906bfe82ba,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-k6hlg,Uid:d6234dc3-6125-4dff-b0cf-9b3c91bde525,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722903594812303297,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-k6hlg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6234dc3-6125-4dff-b0cf-9b3c91bde525,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-06T00:19:38.460484131Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8a50c5cbfcdc796b19bf34955a0e084944f50b3b928d3bdd491424d3d8a839dc,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-rsxhg,Uid:e06c667c-e087-412d-9f29-5327c44624f5,Namespac
e:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722903594787527292,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-rsxhg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e06c667c-e087-412d-9f29-5327c44624f5,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-06T00:19:38.465269557Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c9b6fb3dd0d2f87daab3cc7615517a2b58473f7b15befa96a69133299255d581,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:0d37c81c-0a57-4492-8d0a-67be715cc6c1,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722903594522740211,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d37c81c-0a57-4492-8d0a-67be715cc6c1,},An
notations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-06T00:19:39.929425765Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:045678e3415569be12121b425eed83e7bd2abadbc779a6e20b0e73607e69bf98,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-907863,Uid:424801322ef46d
2df89d296d920cffb0,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722903594467317286,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-907863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 424801322ef46d2df89d296d920cffb0,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.112:8443,kubernetes.io/config.hash: 424801322ef46d2df89d296d920cffb0,kubernetes.io/config.seen: 2024-08-06T00:19:27.885042062Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2c613a5bc6f66889c328c0a7331638eb2343bf8cfe6de7f5155f47781517a15b,Metadata:&PodSandboxMetadata{Name:kube-proxy-lq9cd,Uid:d8adf1bf-ff2f-4a63-a674-f1e4f0ac4c3a,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722903594450407735,Labels:map[string]string{controller-revision-hash: 677fdd8cbc,io.kubernetes.container.name: POD,io.kubernetes
.pod.name: kube-proxy-lq9cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8adf1bf-ff2f-4a63-a674-f1e4f0ac4c3a,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-06T00:19:38.572185869Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0a612b51ff5b56c26cd754bb17e5b7df5903ff08c7d851d08228d5ed2049ef60,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-907863,Uid:db9aa4f0f91684b2d80cd8d5dd1207e3,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722903594413938140,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-907863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9aa4f0f91684b2d80cd8d5dd1207e3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: db9aa4f0f91684b2d80cd8d5dd1207e3,kubernetes.io/config.seen: 2024-08-06T00:19
:27.885045224Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:69a5f6b5a5643edce8335b9029fd231360526476d5e985a4773beb70ee84b696,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-907863,Uid:6ccfe832c2b126df6b6fe0c737f5b6c9,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1722903594332579575,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-907863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ccfe832c2b126df6b6fe0c737f5b6c9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6ccfe832c2b126df6b6fe0c737f5b6c9,kubernetes.io/config.seen: 2024-08-06T00:19:27.885046219Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:15d0e73e0a265c5d5975d2288e48be6571766e7ac5de45ba59133278071da123,Metadata:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-907863,Uid:a22fb3871d12d07740f5fc675c6fcc33,Namespace:kube-system,Atte
mpt:2,},State:SANDBOX_READY,CreatedAt:1722903594320251756,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-907863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a22fb3871d12d07740f5fc675c6fcc33,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.112:2379,kubernetes.io/config.hash: a22fb3871d12d07740f5fc675c6fcc33,kubernetes.io/config.seen: 2024-08-06T00:19:27.951593125Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6d78ee3154662ac093cbd1219273a45de8da67be02f643f1ab86659ad17c838f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-907863,Uid:6ccfe832c2b126df6b6fe0c737f5b6c9,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722903592234147491,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-907863,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ccfe832c2b126df6b6fe0c737f5b6c9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 6ccfe832c2b126df6b6fe0c737f5b6c9,kubernetes.io/config.seen: 2024-08-06T00:19:27.885046219Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:923a57d21a2816a8f9a35a19b52116c06af7ecf135733fb644ec6f5ad71f1b3c,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-907863,Uid:db9aa4f0f91684b2d80cd8d5dd1207e3,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722903592233684966,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-907863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9aa4f0f91684b2d80cd8d5dd1207e3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: db9aa4f0f91684b2d80cd8d5dd1207e3,kubernetes.io/config.seen: 2024-08-06T00:19:27
.885045224Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c99e147fd1a54c2213e916aa1e9a2628820f47c33d84c4057f2911395832692c,Metadata:&PodSandboxMetadata{Name:kube-proxy-lq9cd,Uid:d8adf1bf-ff2f-4a63-a674-f1e4f0ac4c3a,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722903592231371412,Labels:map[string]string{controller-revision-hash: 677fdd8cbc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-lq9cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8adf1bf-ff2f-4a63-a674-f1e4f0ac4c3a,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-06T00:19:38.572185869Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fec2a68913adbcef053bc123a54cd6126f3fec1aaff8007c642f4cd83b66adc1,Metadata:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-907863,Uid:a22fb3871d12d07740f5fc675c6fcc33,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722903592230240879,La
bels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-907863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a22fb3871d12d07740f5fc675c6fcc33,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.112:2379,kubernetes.io/config.hash: a22fb3871d12d07740f5fc675c6fcc33,kubernetes.io/config.seen: 2024-08-06T00:19:27.951593125Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d6259fcc88c156da5e39a01b097575d1918a191a15201be3b7e66ff108b581f5,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-907863,Uid:424801322ef46d2df89d296d920cffb0,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1722903592165229019,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-907863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4248
01322ef46d2df89d296d920cffb0,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.112:8443,kubernetes.io/config.hash: 424801322ef46d2df89d296d920cffb0,kubernetes.io/config.seen: 2024-08-06T00:19:27.885042062Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=6b116d87-ab5a-4647-9440-755d29cb1735 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 06 00:20:38 kubernetes-upgrade-907863 crio[3015]: time="2024-08-06 00:20:38.604338334Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4cf11d00-867f-4041-9818-8fa5e4978b4d name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 00:20:38 kubernetes-upgrade-907863 crio[3015]: time="2024-08-06 00:20:38.604414714Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4cf11d00-867f-4041-9818-8fa5e4978b4d name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 00:20:38 kubernetes-upgrade-907863 crio[3015]: time="2024-08-06 00:20:38.604831385Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a5a6c0c15144e35b2d82d6e5c5fafb7af65b03c1c7a8960b8266c30292fb67d5,PodSandboxId:8a50c5cbfcdc796b19bf34955a0e084944f50b3b928d3bdd491424d3d8a839dc,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722903635634869166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rsxhg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e06c667c-e087-412d-9f29-5327c44624f5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40f2a28f017dc7040c9ddb57165a729b9b1896b58f158b9c47cdbe1c17a2e022,PodSandboxId:ebbc805d4572a510e7e7dfe0c455136cbed20e9d3c451664ce1d1e906bfe82ba,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722903635651347406,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k6hlg,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: d6234dc3-6125-4dff-b0cf-9b3c91bde525,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01dac4acf9a406b9148068a812b2f09cf3cab420a3e597485b2562f991080357,PodSandboxId:2c613a5bc6f66889c328c0a7331638eb2343bf8cfe6de7f5155f47781517a15b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAIN
ER_RUNNING,CreatedAt:1722903635599654060,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lq9cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8adf1bf-ff2f-4a63-a674-f1e4f0ac4c3a,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0349f4112fbfbdaa8af5c3e28976475ade87ee8c8c60064b48aa7ea8fa7cfac6,PodSandboxId:c9b6fb3dd0d2f87daab3cc7615517a2b58473f7b15befa96a69133299255d581,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:172
2903635603166012,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d37c81c-0a57-4492-8d0a-67be715cc6c1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8253810e0ea0f48b972e287db8b4390562655d245869d92f9d54bd4da04aab33,PodSandboxId:045678e3415569be12121b425eed83e7bd2abadbc779a6e20b0e73607e69bf98,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1722903632540571139,
Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-907863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 424801322ef46d2df89d296d920cffb0,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30ec18b502cd8ef8aefd56c1e7176c6fd1428d5cd509c7bf0016bef32bcf8c9a,PodSandboxId:0a612b51ff5b56c26cd754bb17e5b7df5903ff08c7d851d08228d5ed2049ef60,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1722903631908744
818,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-907863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9aa4f0f91684b2d80cd8d5dd1207e3,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6616621af7ac0abed5d194828d9147309ad65adb109ee99d83c92199904af697,PodSandboxId:15d0e73e0a265c5d5975d2288e48be6571766e7ac5de45ba59133278071da123,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1722903631884
923836,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-907863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a22fb3871d12d07740f5fc675c6fcc33,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9415b012078eb3f1e33a013354a90fb6f48039e62f34eb2f2af0cf06486caa1,PodSandboxId:69a5f6b5a5643edce8335b9029fd231360526476d5e985a4773beb70ee84b696,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1722903631896359359,Labels:map[string]
string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-907863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ccfe832c2b126df6b6fe0c737f5b6c9,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61a0d3fa8339169fc4d3f48e407eaae53ee7a4537d54b1ad6d80635e4e5ced7b,PodSandboxId:045678e3415569be12121b425eed83e7bd2abadbc779a6e20b0e73607e69bf98,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722903610005912434,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-907863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 424801322ef46d2df89d296d920cffb0,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd68bc943020cfc45efe4c7bf896c3fbfdeaf49947a1851ec5050b57f5349b33,PodSandboxId:c9b6fb3dd0d2f87daab3cc7615517a2b58473f7b15befa96a69133299255d581,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722903609001275921,Labels:map[string]string{
io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d37c81c-0a57-4492-8d0a-67be715cc6c1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b72493e493bbbfb09237539118d80154b3a2cae34285d46239d72cdbf3e6be78,PodSandboxId:8a50c5cbfcdc796b19bf34955a0e084944f50b3b928d3bdd491424d3d8a839dc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722903595080701979,Labels:map[string]string{io.kubernetes.container.n
ame: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rsxhg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e06c667c-e087-412d-9f29-5327c44624f5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcd434757810bfc1a43b48b540bad35bb2daa0a129cc91f3c8f8f5c8a840d3f,PodSandboxId:ebbc805d4572a510e7e7dfe0c455136cbed20e9d3c451664ce1d1e906bfe82ba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722903595070282285,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k6hlg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6234dc3-6125-4dff-b0cf-9b3c91bde525,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53ef65d4855fce877cf9f2e415966fb26c60456cf78eb1381b0fa7cc3e3b65eb,PodSandboxId:c99e147fd1a54c2213e916aa1e9a2628820f47c33d84c4057f2911395832692c,
Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_EXITED,CreatedAt:1722903592945053596,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lq9cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8adf1bf-ff2f-4a63-a674-f1e4f0ac4c3a,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad00527cd5c8e92719898aac9febcbb9738228aac487e30f43c3138b056d8adc,PodSandboxId:6d78ee3154662ac093cbd1219273a45de8da67be02f643f1ab86659ad17c838f,Metadata:&ContainerMetadata{Name:k
ube-scheduler,Attempt:1,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_EXITED,CreatedAt:1722903592768320241,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-907863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ccfe832c2b126df6b6fe0c737f5b6c9,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4720cc8c7f5da78dbd241c36befde0d46687ef4735777fb51af652b93ea8d290,PodSandboxId:923a57d21a2816a8f9a35a19b52116c06af7ecf135733fb644ec6f5ad71f1b3c,Metadata:&ContainerMetadata{Name:kube-co
ntroller-manager,Attempt:1,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_EXITED,CreatedAt:1722903592771750724,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-907863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9aa4f0f91684b2d80cd8d5dd1207e3,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14d65fe8451ef85c31bae4559ca7aa7a9f6bea8038589e437995ada823c1c56b,PodSandboxId:fec2a68913adbcef053bc123a54cd6126f3fec1aaff8007c642f4cd83b66adc1,Metadata:&Container
Metadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1722903592666548077,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-907863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a22fb3871d12d07740f5fc675c6fcc33,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4cf11d00-867f-4041-9818-8fa5e4978b4d name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 00:20:38 kubernetes-upgrade-907863 crio[3015]: time="2024-08-06 00:20:38.617442677Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d242494d-fa97-4993-9647-fb72a791b173 name=/runtime.v1.RuntimeService/Version
	Aug 06 00:20:38 kubernetes-upgrade-907863 crio[3015]: time="2024-08-06 00:20:38.617528936Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d242494d-fa97-4993-9647-fb72a791b173 name=/runtime.v1.RuntimeService/Version
	Aug 06 00:20:38 kubernetes-upgrade-907863 crio[3015]: time="2024-08-06 00:20:38.619310756Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2330dae8-813f-4a46-abd7-e23877aa4b2f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 00:20:38 kubernetes-upgrade-907863 crio[3015]: time="2024-08-06 00:20:38.619668002Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722903638619645934,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2330dae8-813f-4a46-abd7-e23877aa4b2f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 00:20:38 kubernetes-upgrade-907863 crio[3015]: time="2024-08-06 00:20:38.620428233Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b49a002e-513b-4101-9e7c-ac3dfa8aec02 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 00:20:38 kubernetes-upgrade-907863 crio[3015]: time="2024-08-06 00:20:38.620508123Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b49a002e-513b-4101-9e7c-ac3dfa8aec02 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 00:20:38 kubernetes-upgrade-907863 crio[3015]: time="2024-08-06 00:20:38.621260415Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a5a6c0c15144e35b2d82d6e5c5fafb7af65b03c1c7a8960b8266c30292fb67d5,PodSandboxId:8a50c5cbfcdc796b19bf34955a0e084944f50b3b928d3bdd491424d3d8a839dc,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722903635634869166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rsxhg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e06c667c-e087-412d-9f29-5327c44624f5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40f2a28f017dc7040c9ddb57165a729b9b1896b58f158b9c47cdbe1c17a2e022,PodSandboxId:ebbc805d4572a510e7e7dfe0c455136cbed20e9d3c451664ce1d1e906bfe82ba,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722903635651347406,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k6hlg,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: d6234dc3-6125-4dff-b0cf-9b3c91bde525,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01dac4acf9a406b9148068a812b2f09cf3cab420a3e597485b2562f991080357,PodSandboxId:2c613a5bc6f66889c328c0a7331638eb2343bf8cfe6de7f5155f47781517a15b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAIN
ER_RUNNING,CreatedAt:1722903635599654060,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lq9cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8adf1bf-ff2f-4a63-a674-f1e4f0ac4c3a,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0349f4112fbfbdaa8af5c3e28976475ade87ee8c8c60064b48aa7ea8fa7cfac6,PodSandboxId:c9b6fb3dd0d2f87daab3cc7615517a2b58473f7b15befa96a69133299255d581,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:172
2903635603166012,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d37c81c-0a57-4492-8d0a-67be715cc6c1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8253810e0ea0f48b972e287db8b4390562655d245869d92f9d54bd4da04aab33,PodSandboxId:045678e3415569be12121b425eed83e7bd2abadbc779a6e20b0e73607e69bf98,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1722903632540571139,
Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-907863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 424801322ef46d2df89d296d920cffb0,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30ec18b502cd8ef8aefd56c1e7176c6fd1428d5cd509c7bf0016bef32bcf8c9a,PodSandboxId:0a612b51ff5b56c26cd754bb17e5b7df5903ff08c7d851d08228d5ed2049ef60,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1722903631908744
818,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-907863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9aa4f0f91684b2d80cd8d5dd1207e3,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6616621af7ac0abed5d194828d9147309ad65adb109ee99d83c92199904af697,PodSandboxId:15d0e73e0a265c5d5975d2288e48be6571766e7ac5de45ba59133278071da123,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1722903631884
923836,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-907863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a22fb3871d12d07740f5fc675c6fcc33,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9415b012078eb3f1e33a013354a90fb6f48039e62f34eb2f2af0cf06486caa1,PodSandboxId:69a5f6b5a5643edce8335b9029fd231360526476d5e985a4773beb70ee84b696,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1722903631896359359,Labels:map[string]
string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-907863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ccfe832c2b126df6b6fe0c737f5b6c9,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61a0d3fa8339169fc4d3f48e407eaae53ee7a4537d54b1ad6d80635e4e5ced7b,PodSandboxId:045678e3415569be12121b425eed83e7bd2abadbc779a6e20b0e73607e69bf98,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722903610005912434,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-907863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 424801322ef46d2df89d296d920cffb0,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd68bc943020cfc45efe4c7bf896c3fbfdeaf49947a1851ec5050b57f5349b33,PodSandboxId:c9b6fb3dd0d2f87daab3cc7615517a2b58473f7b15befa96a69133299255d581,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722903609001275921,Labels:map[string]string{
io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d37c81c-0a57-4492-8d0a-67be715cc6c1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b72493e493bbbfb09237539118d80154b3a2cae34285d46239d72cdbf3e6be78,PodSandboxId:8a50c5cbfcdc796b19bf34955a0e084944f50b3b928d3bdd491424d3d8a839dc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722903595080701979,Labels:map[string]string{io.kubernetes.container.n
ame: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rsxhg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e06c667c-e087-412d-9f29-5327c44624f5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcd434757810bfc1a43b48b540bad35bb2daa0a129cc91f3c8f8f5c8a840d3f,PodSandboxId:ebbc805d4572a510e7e7dfe0c455136cbed20e9d3c451664ce1d1e906bfe82ba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722903595070282285,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k6hlg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6234dc3-6125-4dff-b0cf-9b3c91bde525,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53ef65d4855fce877cf9f2e415966fb26c60456cf78eb1381b0fa7cc3e3b65eb,PodSandboxId:c99e147fd1a54c2213e916aa1e9a2628820f47c33d84c4057f2911395832692c,
Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_EXITED,CreatedAt:1722903592945053596,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lq9cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8adf1bf-ff2f-4a63-a674-f1e4f0ac4c3a,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad00527cd5c8e92719898aac9febcbb9738228aac487e30f43c3138b056d8adc,PodSandboxId:6d78ee3154662ac093cbd1219273a45de8da67be02f643f1ab86659ad17c838f,Metadata:&ContainerMetadata{Name:k
ube-scheduler,Attempt:1,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_EXITED,CreatedAt:1722903592768320241,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-907863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ccfe832c2b126df6b6fe0c737f5b6c9,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4720cc8c7f5da78dbd241c36befde0d46687ef4735777fb51af652b93ea8d290,PodSandboxId:923a57d21a2816a8f9a35a19b52116c06af7ecf135733fb644ec6f5ad71f1b3c,Metadata:&ContainerMetadata{Name:kube-co
ntroller-manager,Attempt:1,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_EXITED,CreatedAt:1722903592771750724,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-907863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9aa4f0f91684b2d80cd8d5dd1207e3,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14d65fe8451ef85c31bae4559ca7aa7a9f6bea8038589e437995ada823c1c56b,PodSandboxId:fec2a68913adbcef053bc123a54cd6126f3fec1aaff8007c642f4cd83b66adc1,Metadata:&Container
Metadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1722903592666548077,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-907863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a22fb3871d12d07740f5fc675c6fcc33,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b49a002e-513b-4101-9e7c-ac3dfa8aec02 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 00:20:38 kubernetes-upgrade-907863 crio[3015]: time="2024-08-06 00:20:38.659770424Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4f894fc6-8c38-4241-aad5-0aae9e849f0c name=/runtime.v1.RuntimeService/Version
	Aug 06 00:20:38 kubernetes-upgrade-907863 crio[3015]: time="2024-08-06 00:20:38.659911205Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4f894fc6-8c38-4241-aad5-0aae9e849f0c name=/runtime.v1.RuntimeService/Version
	Aug 06 00:20:38 kubernetes-upgrade-907863 crio[3015]: time="2024-08-06 00:20:38.661292775Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=884b2349-846a-407f-baa1-4d02ed9fe665 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 00:20:38 kubernetes-upgrade-907863 crio[3015]: time="2024-08-06 00:20:38.661659485Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722903638661636546,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=884b2349-846a-407f-baa1-4d02ed9fe665 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 00:20:38 kubernetes-upgrade-907863 crio[3015]: time="2024-08-06 00:20:38.662200208Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=22aa99b3-8b62-4956-bcf3-105cd3ef438e name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 00:20:38 kubernetes-upgrade-907863 crio[3015]: time="2024-08-06 00:20:38.662270055Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=22aa99b3-8b62-4956-bcf3-105cd3ef438e name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 00:20:38 kubernetes-upgrade-907863 crio[3015]: time="2024-08-06 00:20:38.662852975Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a5a6c0c15144e35b2d82d6e5c5fafb7af65b03c1c7a8960b8266c30292fb67d5,PodSandboxId:8a50c5cbfcdc796b19bf34955a0e084944f50b3b928d3bdd491424d3d8a839dc,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722903635634869166,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rsxhg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e06c667c-e087-412d-9f29-5327c44624f5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40f2a28f017dc7040c9ddb57165a729b9b1896b58f158b9c47cdbe1c17a2e022,PodSandboxId:ebbc805d4572a510e7e7dfe0c455136cbed20e9d3c451664ce1d1e906bfe82ba,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722903635651347406,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k6hlg,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: d6234dc3-6125-4dff-b0cf-9b3c91bde525,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01dac4acf9a406b9148068a812b2f09cf3cab420a3e597485b2562f991080357,PodSandboxId:2c613a5bc6f66889c328c0a7331638eb2343bf8cfe6de7f5155f47781517a15b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAIN
ER_RUNNING,CreatedAt:1722903635599654060,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lq9cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8adf1bf-ff2f-4a63-a674-f1e4f0ac4c3a,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0349f4112fbfbdaa8af5c3e28976475ade87ee8c8c60064b48aa7ea8fa7cfac6,PodSandboxId:c9b6fb3dd0d2f87daab3cc7615517a2b58473f7b15befa96a69133299255d581,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:172
2903635603166012,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d37c81c-0a57-4492-8d0a-67be715cc6c1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8253810e0ea0f48b972e287db8b4390562655d245869d92f9d54bd4da04aab33,PodSandboxId:045678e3415569be12121b425eed83e7bd2abadbc779a6e20b0e73607e69bf98,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_RUNNING,CreatedAt:1722903632540571139,
Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-907863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 424801322ef46d2df89d296d920cffb0,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30ec18b502cd8ef8aefd56c1e7176c6fd1428d5cd509c7bf0016bef32bcf8c9a,PodSandboxId:0a612b51ff5b56c26cd754bb17e5b7df5903ff08c7d851d08228d5ed2049ef60,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_RUNNING,CreatedAt:1722903631908744
818,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-907863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9aa4f0f91684b2d80cd8d5dd1207e3,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6616621af7ac0abed5d194828d9147309ad65adb109ee99d83c92199904af697,PodSandboxId:15d0e73e0a265c5d5975d2288e48be6571766e7ac5de45ba59133278071da123,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1722903631884
923836,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-907863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a22fb3871d12d07740f5fc675c6fcc33,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a9415b012078eb3f1e33a013354a90fb6f48039e62f34eb2f2af0cf06486caa1,PodSandboxId:69a5f6b5a5643edce8335b9029fd231360526476d5e985a4773beb70ee84b696,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_RUNNING,CreatedAt:1722903631896359359,Labels:map[string]
string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-907863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ccfe832c2b126df6b6fe0c737f5b6c9,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61a0d3fa8339169fc4d3f48e407eaae53ee7a4537d54b1ad6d80635e4e5ced7b,PodSandboxId:045678e3415569be12121b425eed83e7bd2abadbc779a6e20b0e73607e69bf98,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0,State:CONTAINER_EXITED,CreatedAt:1722903610005912434,Labels:map[string]string
{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-907863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 424801322ef46d2df89d296d920cffb0,},Annotations:map[string]string{io.kubernetes.container.hash: 380ae747,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd68bc943020cfc45efe4c7bf896c3fbfdeaf49947a1851ec5050b57f5349b33,PodSandboxId:c9b6fb3dd0d2f87daab3cc7615517a2b58473f7b15befa96a69133299255d581,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722903609001275921,Labels:map[string]string{
io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d37c81c-0a57-4492-8d0a-67be715cc6c1,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b72493e493bbbfb09237539118d80154b3a2cae34285d46239d72cdbf3e6be78,PodSandboxId:8a50c5cbfcdc796b19bf34955a0e084944f50b3b928d3bdd491424d3d8a839dc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722903595080701979,Labels:map[string]string{io.kubernetes.container.n
ame: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rsxhg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e06c667c-e087-412d-9f29-5327c44624f5,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4dcd434757810bfc1a43b48b540bad35bb2daa0a129cc91f3c8f8f5c8a840d3f,PodSandboxId:ebbc805d4572a510e7e7dfe0c455136cbed20e9d3c451664ce1d1e906bfe82ba,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,R
untimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722903595070282285,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-k6hlg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6234dc3-6125-4dff-b0cf-9b3c91bde525,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53ef65d4855fce877cf9f2e415966fb26c60456cf78eb1381b0fa7cc3e3b65eb,PodSandboxId:c99e147fd1a54c2213e916aa1e9a2628820f47c33d84c4057f2911395832692c,
Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318,State:CONTAINER_EXITED,CreatedAt:1722903592945053596,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lq9cd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d8adf1bf-ff2f-4a63-a674-f1e4f0ac4c3a,},Annotations:map[string]string{io.kubernetes.container.hash: 16faf14d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad00527cd5c8e92719898aac9febcbb9738228aac487e30f43c3138b056d8adc,PodSandboxId:6d78ee3154662ac093cbd1219273a45de8da67be02f643f1ab86659ad17c838f,Metadata:&ContainerMetadata{Name:k
ube-scheduler,Attempt:1,},Image:&ImageSpec{Image:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c,State:CONTAINER_EXITED,CreatedAt:1722903592768320241,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-907863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ccfe832c2b126df6b6fe0c737f5b6c9,},Annotations:map[string]string{io.kubernetes.container.hash: f48bc30b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4720cc8c7f5da78dbd241c36befde0d46687ef4735777fb51af652b93ea8d290,PodSandboxId:923a57d21a2816a8f9a35a19b52116c06af7ecf135733fb644ec6f5ad71f1b3c,Metadata:&ContainerMetadata{Name:kube-co
ntroller-manager,Attempt:1,},Image:&ImageSpec{Image:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c,State:CONTAINER_EXITED,CreatedAt:1722903592771750724,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-907863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db9aa4f0f91684b2d80cd8d5dd1207e3,},Annotations:map[string]string{io.kubernetes.container.hash: e1cacf85,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14d65fe8451ef85c31bae4559ca7aa7a9f6bea8038589e437995ada823c1c56b,PodSandboxId:fec2a68913adbcef053bc123a54cd6126f3fec1aaff8007c642f4cd83b66adc1,Metadata:&Container
Metadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1722903592666548077,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-907863,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a22fb3871d12d07740f5fc675c6fcc33,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=22aa99b3-8b62-4956-bcf3-105cd3ef438e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	40f2a28f017dc       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   ebbc805d4572a       coredns-6f6b679f8f-k6hlg
	a5a6c0c15144e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   8a50c5cbfcdc7       coredns-6f6b679f8f-rsxhg
	0349f4112fbfb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       3                   c9b6fb3dd0d2f       storage-provisioner
	01dac4acf9a40       41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318   3 seconds ago       Running             kube-proxy                2                   2c613a5bc6f66       kube-proxy-lq9cd
	8253810e0ea0f       c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0   6 seconds ago       Running             kube-apiserver            3                   045678e341556       kube-apiserver-kubernetes-upgrade-907863
	30ec18b502cd8       fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c   6 seconds ago       Running             kube-controller-manager   2                   0a612b51ff5b5       kube-controller-manager-kubernetes-upgrade-907863
	a9415b012078e       0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c   6 seconds ago       Running             kube-scheduler            2                   69a5f6b5a5643       kube-scheduler-kubernetes-upgrade-907863
	6616621af7ac0       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   6 seconds ago       Running             etcd                      2                   15d0e73e0a265       etcd-kubernetes-upgrade-907863
	61a0d3fa83391       c7883f2335b7ce2c847a07069ce8eb191fdbfda3036f25fa95aaa384e0049ee0   28 seconds ago      Exited              kube-apiserver            2                   045678e341556       kube-apiserver-kubernetes-upgrade-907863
	bd68bc943020c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   29 seconds ago      Exited              storage-provisioner       2                   c9b6fb3dd0d2f       storage-provisioner
	b72493e493bbb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   43 seconds ago      Exited              coredns                   1                   8a50c5cbfcdc7       coredns-6f6b679f8f-rsxhg
	4dcd434757810       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   43 seconds ago      Exited              coredns                   1                   ebbc805d4572a       coredns-6f6b679f8f-k6hlg
	53ef65d4855fc       41cec1c4af04c4f503b82c359863ff255b10112d34f3a5a9db11ea64e4d70318   45 seconds ago      Exited              kube-proxy                1                   c99e147fd1a54       kube-proxy-lq9cd
	4720cc8c7f5da       fd01d5222f3a9f781835c02034d6dba130c06f2830545b028adb347d047d4b5c   45 seconds ago      Exited              kube-controller-manager   1                   923a57d21a281       kube-controller-manager-kubernetes-upgrade-907863
	ad00527cd5c8e       0fd085a247d6c1cde9a85d6b4029ca518ab15b67ac3462fc1d4022df73605d1c   45 seconds ago      Exited              kube-scheduler            1                   6d78ee3154662       kube-scheduler-kubernetes-upgrade-907863
	14d65fe8451ef       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   46 seconds ago      Exited              etcd                      1                   fec2a68913adb       etcd-kubernetes-upgrade-907863
	
	
	==> coredns [40f2a28f017dc7040c9ddb57165a729b9b1896b58f158b9c47cdbe1c17a2e022] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [4dcd434757810bfc1a43b48b540bad35bb2daa0a129cc91f3c8f8f5c8a840d3f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a5a6c0c15144e35b2d82d6e5c5fafb7af65b03c1c7a8960b8266c30292fb67d5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [b72493e493bbbfb09237539118d80154b3a2cae34285d46239d72cdbf3e6be78] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-907863
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-907863
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 00:19:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-907863
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 00:20:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Aug 2024 00:20:34 +0000   Tue, 06 Aug 2024 00:19:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Aug 2024 00:20:34 +0000   Tue, 06 Aug 2024 00:19:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Aug 2024 00:20:34 +0000   Tue, 06 Aug 2024 00:19:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Aug 2024 00:20:34 +0000   Tue, 06 Aug 2024 00:19:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.112
	  Hostname:    kubernetes-upgrade-907863
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e3d7ff4abe5944858ff72aeafbd4529f
	  System UUID:                e3d7ff4a-be59-4485-8ff7-2aeafbd4529f
	  Boot ID:                    6d69752a-247d-415f-8084-90ef5d1f9f38
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-rc.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-k6hlg                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     61s
	  kube-system                 coredns-6f6b679f8f-rsxhg                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     61s
	  kube-system                 etcd-kubernetes-upgrade-907863                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         62s
	  kube-system                 kube-apiserver-kubernetes-upgrade-907863             250m (12%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-907863    200m (10%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-lq9cd                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-scheduler-kubernetes-upgrade-907863             100m (5%)     0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 59s                kube-proxy       
	  Normal  Starting                 2s                 kube-proxy       
	  Normal  Starting                 72s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  72s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  71s (x8 over 72s)  kubelet          Node kubernetes-upgrade-907863 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    71s (x8 over 72s)  kubelet          Node kubernetes-upgrade-907863 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     71s (x7 over 72s)  kubelet          Node kubernetes-upgrade-907863 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           62s                node-controller  Node kubernetes-upgrade-907863 event: Registered Node kubernetes-upgrade-907863 in Controller
	  Normal  RegisteredNode           1s                 node-controller  Node kubernetes-upgrade-907863 event: Registered Node kubernetes-upgrade-907863 in Controller
	
	
	==> dmesg <==
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.383892] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.190949] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.243978] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.140910] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.632044] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +6.176817] systemd-fstab-generator[734]: Ignoring "noauto" option for root device
	[  +0.068452] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.056591] systemd-fstab-generator[857]: Ignoring "noauto" option for root device
	[ +10.925219] systemd-fstab-generator[1245]: Ignoring "noauto" option for root device
	[  +0.103432] kauditd_printk_skb: 97 callbacks suppressed
	[ +12.799618] systemd-fstab-generator[2191]: Ignoring "noauto" option for root device
	[  +0.074422] kauditd_printk_skb: 106 callbacks suppressed
	[  +0.067493] systemd-fstab-generator[2203]: Ignoring "noauto" option for root device
	[  +0.162583] systemd-fstab-generator[2217]: Ignoring "noauto" option for root device
	[  +0.154572] systemd-fstab-generator[2229]: Ignoring "noauto" option for root device
	[  +1.430001] systemd-fstab-generator[2847]: Ignoring "noauto" option for root device
	[  +3.575785] systemd-fstab-generator[3756]: Ignoring "noauto" option for root device
	[  +0.084894] kauditd_printk_skb: 278 callbacks suppressed
	[Aug 6 00:20] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.458644] systemd-fstab-generator[4019]: Ignoring "noauto" option for root device
	[  +0.087698] kauditd_printk_skb: 2 callbacks suppressed
	[ +19.434798] kauditd_printk_skb: 24 callbacks suppressed
	[  +5.276404] systemd-fstab-generator[4583]: Ignoring "noauto" option for root device
	[  +0.119170] kauditd_printk_skb: 33 callbacks suppressed
	
	
	==> etcd [14d65fe8451ef85c31bae4559ca7aa7a9f6bea8038589e437995ada823c1c56b] <==
	{"level":"info","ts":"2024-08-06T00:19:53.412926Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-08-06T00:19:53.446229Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"edb50d136e932dfe","local-member-id":"b009fc366101617b","commit-index":383}
	{"level":"info","ts":"2024-08-06T00:19:53.446582Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b009fc366101617b switched to configuration voters=()"}
	{"level":"info","ts":"2024-08-06T00:19:53.446679Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b009fc366101617b became follower at term 2"}
	{"level":"info","ts":"2024-08-06T00:19:53.446721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft b009fc366101617b [peers: [], term: 2, commit: 383, applied: 0, lastindex: 383, lastterm: 2]"}
	{"level":"warn","ts":"2024-08-06T00:19:53.449632Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-08-06T00:19:53.455244Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":376}
	{"level":"info","ts":"2024-08-06T00:19:53.459458Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-08-06T00:19:53.464628Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"b009fc366101617b","timeout":"7s"}
	{"level":"info","ts":"2024-08-06T00:19:53.465151Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"b009fc366101617b"}
	{"level":"info","ts":"2024-08-06T00:19:53.465261Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"b009fc366101617b","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-08-06T00:19:53.465887Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-06T00:19:53.474871Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-08-06T00:19:53.475734Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-06T00:19:53.477499Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-06T00:19:53.478064Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-06T00:19:53.477198Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-06T00:19:53.478546Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"b009fc366101617b","initial-advertise-peer-urls":["https://192.168.72.112:2380"],"listen-peer-urls":["https://192.168.72.112:2380"],"advertise-client-urls":["https://192.168.72.112:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.112:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-06T00:19:53.478874Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-06T00:19:53.477221Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.72.112:2380"}
	{"level":"info","ts":"2024-08-06T00:19:53.485571Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.72.112:2380"}
	{"level":"info","ts":"2024-08-06T00:19:53.477464Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b009fc366101617b switched to configuration voters=(12684947135951626619)"}
	{"level":"info","ts":"2024-08-06T00:19:53.485741Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"edb50d136e932dfe","local-member-id":"b009fc366101617b","added-peer-id":"b009fc366101617b","added-peer-peer-urls":["https://192.168.72.112:2380"]}
	{"level":"info","ts":"2024-08-06T00:19:53.486014Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"edb50d136e932dfe","local-member-id":"b009fc366101617b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T00:19:53.486066Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	
	
	==> etcd [6616621af7ac0abed5d194828d9147309ad65adb109ee99d83c92199904af697] <==
	{"level":"info","ts":"2024-08-06T00:20:32.156187Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-06T00:20:32.160447Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-06T00:20:32.161091Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"b009fc366101617b","initial-advertise-peer-urls":["https://192.168.72.112:2380"],"listen-peer-urls":["https://192.168.72.112:2380"],"advertise-client-urls":["https://192.168.72.112:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.112:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-06T00:20:32.161186Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-06T00:20:32.161281Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-06T00:20:32.171064Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-06T00:20:32.171105Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-06T00:20:32.161885Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.72.112:2380"}
	{"level":"info","ts":"2024-08-06T00:20:32.171305Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.72.112:2380"}
	{"level":"info","ts":"2024-08-06T00:20:32.223266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b009fc366101617b is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-06T00:20:32.223321Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b009fc366101617b became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-06T00:20:32.223345Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b009fc366101617b received MsgPreVoteResp from b009fc366101617b at term 2"}
	{"level":"info","ts":"2024-08-06T00:20:32.223357Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b009fc366101617b became candidate at term 3"}
	{"level":"info","ts":"2024-08-06T00:20:32.223362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b009fc366101617b received MsgVoteResp from b009fc366101617b at term 3"}
	{"level":"info","ts":"2024-08-06T00:20:32.223371Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b009fc366101617b became leader at term 3"}
	{"level":"info","ts":"2024-08-06T00:20:32.223377Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b009fc366101617b elected leader b009fc366101617b at term 3"}
	{"level":"info","ts":"2024-08-06T00:20:32.227979Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b009fc366101617b","local-member-attributes":"{Name:kubernetes-upgrade-907863 ClientURLs:[https://192.168.72.112:2379]}","request-path":"/0/members/b009fc366101617b/attributes","cluster-id":"edb50d136e932dfe","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-06T00:20:32.228160Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T00:20:32.228974Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T00:20:32.229095Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-06T00:20:32.229134Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-06T00:20:32.233512Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-06T00:20:32.234375Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.112:2379"}
	{"level":"info","ts":"2024-08-06T00:20:32.253150Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-06T00:20:32.255990Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 00:20:39 up 1 min,  0 users,  load average: 1.40, 0.46, 0.16
	Linux kubernetes-upgrade-907863 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [61a0d3fa8339169fc4d3f48e407eaae53ee7a4537d54b1ad6d80635e4e5ced7b] <==
	I0806 00:20:10.169209       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0806 00:20:10.605011       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:20:10.605234       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0806 00:20:10.605289       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0806 00:20:10.611521       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0806 00:20:10.619552       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0806 00:20:10.619571       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0806 00:20:10.619864       1 instance.go:232] Using reconciler: lease
	W0806 00:20:10.620863       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:20:11.605578       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:20:11.605748       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:20:11.621878       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:20:13.092722       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:20:13.117037       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:20:13.451879       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:20:15.636111       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:20:16.118848       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:20:16.197465       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:20:19.842692       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:20:19.934169       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:20:20.657766       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:20:25.957537       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:20:27.389691       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:20:28.112411       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0806 00:20:30.621421       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [8253810e0ea0f48b972e287db8b4390562655d245869d92f9d54bd4da04aab33] <==
	I0806 00:20:34.735314       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0806 00:20:34.735346       1 policy_source.go:224] refreshing policies
	I0806 00:20:34.737469       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0806 00:20:34.749641       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0806 00:20:34.750733       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0806 00:20:34.752727       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0806 00:20:34.752857       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0806 00:20:34.754852       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0806 00:20:34.754985       1 shared_informer.go:320] Caches are synced for configmaps
	I0806 00:20:34.755295       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0806 00:20:34.761941       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0806 00:20:34.778528       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0806 00:20:34.791621       1 aggregator.go:171] initial CRD sync complete...
	I0806 00:20:34.794512       1 autoregister_controller.go:144] Starting autoregister controller
	I0806 00:20:34.794683       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0806 00:20:34.794715       1 cache.go:39] Caches are synced for autoregister controller
	I0806 00:20:34.816210       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0806 00:20:35.668972       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0806 00:20:36.455498       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0806 00:20:36.468934       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0806 00:20:36.510931       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0806 00:20:36.651666       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0806 00:20:36.664179       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0806 00:20:38.062031       1 controller.go:615] quota admission added evaluator for: endpoints
	I0806 00:20:38.360601       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [30ec18b502cd8ef8aefd56c1e7176c6fd1428d5cd509c7bf0016bef32bcf8c9a] <==
	I0806 00:20:38.065986       1 shared_informer.go:320] Caches are synced for node
	I0806 00:20:38.066194       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0806 00:20:38.066347       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0806 00:20:38.066393       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0806 00:20:38.066524       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0806 00:20:38.066660       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-907863"
	I0806 00:20:38.070241       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0806 00:20:38.106366       1 shared_informer.go:320] Caches are synced for attach detach
	I0806 00:20:38.107393       1 shared_informer.go:320] Caches are synced for TTL
	I0806 00:20:38.108371       1 shared_informer.go:320] Caches are synced for deployment
	I0806 00:20:38.138741       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0806 00:20:38.142056       1 shared_informer.go:320] Caches are synced for disruption
	I0806 00:20:38.160662       1 shared_informer.go:320] Caches are synced for taint
	I0806 00:20:38.160875       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0806 00:20:38.160951       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-907863"
	I0806 00:20:38.160984       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0806 00:20:38.179706       1 shared_informer.go:320] Caches are synced for resource quota
	I0806 00:20:38.213506       1 shared_informer.go:320] Caches are synced for resource quota
	I0806 00:20:38.258201       1 shared_informer.go:320] Caches are synced for service account
	I0806 00:20:38.303624       1 shared_informer.go:320] Caches are synced for namespace
	I0806 00:20:38.475345       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="336.392796ms"
	I0806 00:20:38.475451       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="58.461µs"
	I0806 00:20:38.719061       1 shared_informer.go:320] Caches are synced for garbage collector
	I0806 00:20:38.754991       1 shared_informer.go:320] Caches are synced for garbage collector
	I0806 00:20:38.755034       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [4720cc8c7f5da78dbd241c36befde0d46687ef4735777fb51af652b93ea8d290] <==
	
	
	==> kube-proxy [01dac4acf9a406b9148068a812b2f09cf3cab420a3e597485b2562f991080357] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0806 00:20:35.990914       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0806 00:20:36.001165       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.112"]
	E0806 00:20:36.001391       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0806 00:20:36.058102       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0806 00:20:36.058233       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0806 00:20:36.058316       1 server_linux.go:169] "Using iptables Proxier"
	I0806 00:20:36.061432       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0806 00:20:36.061723       1 server.go:483] "Version info" version="v1.31.0-rc.0"
	I0806 00:20:36.061877       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 00:20:36.068504       1 config.go:197] "Starting service config controller"
	I0806 00:20:36.068593       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0806 00:20:36.068673       1 config.go:104] "Starting endpoint slice config controller"
	I0806 00:20:36.068714       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0806 00:20:36.069976       1 config.go:326] "Starting node config controller"
	I0806 00:20:36.070041       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0806 00:20:36.169888       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0806 00:20:36.170083       1 shared_informer.go:320] Caches are synced for node config
	I0806 00:20:36.170111       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [53ef65d4855fce877cf9f2e415966fb26c60456cf78eb1381b0fa7cc3e3b65eb] <==
	
	
	==> kube-scheduler [a9415b012078eb3f1e33a013354a90fb6f48039e62f34eb2f2af0cf06486caa1] <==
	I0806 00:20:32.850316       1 serving.go:386] Generated self-signed cert in-memory
	W0806 00:20:34.708629       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0806 00:20:34.708760       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0806 00:20:34.708855       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0806 00:20:34.708887       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0806 00:20:34.773609       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0-rc.0"
	I0806 00:20:34.773714       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 00:20:34.776112       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0806 00:20:34.776213       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0806 00:20:34.776299       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0806 00:20:34.776393       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0806 00:20:34.876455       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [ad00527cd5c8e92719898aac9febcbb9738228aac487e30f43c3138b056d8adc] <==
	
	
	==> kubelet <==
	Aug 06 00:20:31 kubernetes-upgrade-907863 kubelet[4026]: I0806 00:20:31.833238    4026 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-907863"
	Aug 06 00:20:31 kubernetes-upgrade-907863 kubelet[4026]: E0806 00:20:31.834203    4026 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.72.112:8443: connect: connection refused" node="kubernetes-upgrade-907863"
	Aug 06 00:20:31 kubernetes-upgrade-907863 kubelet[4026]: I0806 00:20:31.863290    4026 scope.go:117] "RemoveContainer" containerID="14d65fe8451ef85c31bae4559ca7aa7a9f6bea8038589e437995ada823c1c56b"
	Aug 06 00:20:31 kubernetes-upgrade-907863 kubelet[4026]: I0806 00:20:31.867472    4026 scope.go:117] "RemoveContainer" containerID="ad00527cd5c8e92719898aac9febcbb9738228aac487e30f43c3138b056d8adc"
	Aug 06 00:20:31 kubernetes-upgrade-907863 kubelet[4026]: I0806 00:20:31.875141    4026 scope.go:117] "RemoveContainer" containerID="4720cc8c7f5da78dbd241c36befde0d46687ef4735777fb51af652b93ea8d290"
	Aug 06 00:20:32 kubernetes-upgrade-907863 kubelet[4026]: E0806 00:20:32.030040    4026 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-907863?timeout=10s\": dial tcp 192.168.72.112:8443: connect: connection refused" interval="800ms"
	Aug 06 00:20:32 kubernetes-upgrade-907863 kubelet[4026]: E0806 00:20:32.355091    4026 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722903632354534413,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 06 00:20:32 kubernetes-upgrade-907863 kubelet[4026]: E0806 00:20:32.355124    4026 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722903632354534413,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125243,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 06 00:20:32 kubernetes-upgrade-907863 kubelet[4026]: I0806 00:20:32.525945    4026 scope.go:117] "RemoveContainer" containerID="61a0d3fa8339169fc4d3f48e407eaae53ee7a4537d54b1ad6d80635e4e5ced7b"
	Aug 06 00:20:32 kubernetes-upgrade-907863 kubelet[4026]: E0806 00:20:32.831987    4026 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-907863?timeout=10s\": dial tcp 192.168.72.112:8443: connect: connection refused" interval="1.6s"
	Aug 06 00:20:33 kubernetes-upgrade-907863 kubelet[4026]: I0806 00:20:33.435842    4026 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-907863"
	Aug 06 00:20:34 kubernetes-upgrade-907863 kubelet[4026]: I0806 00:20:34.806417    4026 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-907863"
	Aug 06 00:20:34 kubernetes-upgrade-907863 kubelet[4026]: I0806 00:20:34.806883    4026 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-907863"
	Aug 06 00:20:34 kubernetes-upgrade-907863 kubelet[4026]: I0806 00:20:34.806971    4026 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 06 00:20:34 kubernetes-upgrade-907863 kubelet[4026]: I0806 00:20:34.808004    4026 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 06 00:20:35 kubernetes-upgrade-907863 kubelet[4026]: I0806 00:20:35.252284    4026 apiserver.go:52] "Watching apiserver"
	Aug 06 00:20:35 kubernetes-upgrade-907863 kubelet[4026]: I0806 00:20:35.345070    4026 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Aug 06 00:20:35 kubernetes-upgrade-907863 kubelet[4026]: I0806 00:20:35.428894    4026 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d8adf1bf-ff2f-4a63-a674-f1e4f0ac4c3a-xtables-lock\") pod \"kube-proxy-lq9cd\" (UID: \"d8adf1bf-ff2f-4a63-a674-f1e4f0ac4c3a\") " pod="kube-system/kube-proxy-lq9cd"
	Aug 06 00:20:35 kubernetes-upgrade-907863 kubelet[4026]: I0806 00:20:35.429056    4026 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d8adf1bf-ff2f-4a63-a674-f1e4f0ac4c3a-lib-modules\") pod \"kube-proxy-lq9cd\" (UID: \"d8adf1bf-ff2f-4a63-a674-f1e4f0ac4c3a\") " pod="kube-system/kube-proxy-lq9cd"
	Aug 06 00:20:35 kubernetes-upgrade-907863 kubelet[4026]: I0806 00:20:35.429174    4026 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0d37c81c-0a57-4492-8d0a-67be715cc6c1-tmp\") pod \"storage-provisioner\" (UID: \"0d37c81c-0a57-4492-8d0a-67be715cc6c1\") " pod="kube-system/storage-provisioner"
	Aug 06 00:20:35 kubernetes-upgrade-907863 kubelet[4026]: I0806 00:20:35.560136    4026 scope.go:117] "RemoveContainer" containerID="bd68bc943020cfc45efe4c7bf896c3fbfdeaf49947a1851ec5050b57f5349b33"
	Aug 06 00:20:35 kubernetes-upgrade-907863 kubelet[4026]: I0806 00:20:35.560743    4026 scope.go:117] "RemoveContainer" containerID="53ef65d4855fce877cf9f2e415966fb26c60456cf78eb1381b0fa7cc3e3b65eb"
	Aug 06 00:20:35 kubernetes-upgrade-907863 kubelet[4026]: I0806 00:20:35.561186    4026 scope.go:117] "RemoveContainer" containerID="4dcd434757810bfc1a43b48b540bad35bb2daa0a129cc91f3c8f8f5c8a840d3f"
	Aug 06 00:20:35 kubernetes-upgrade-907863 kubelet[4026]: I0806 00:20:35.561731    4026 scope.go:117] "RemoveContainer" containerID="b72493e493bbbfb09237539118d80154b3a2cae34285d46239d72cdbf3e6be78"
	Aug 06 00:20:35 kubernetes-upgrade-907863 kubelet[4026]: E0806 00:20:35.608005    4026 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-907863\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-907863"
	
	
	==> storage-provisioner [0349f4112fbfbdaa8af5c3e28976475ade87ee8c8c60064b48aa7ea8fa7cfac6] <==
	I0806 00:20:35.746692       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0806 00:20:35.800268       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0806 00:20:35.800324       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [bd68bc943020cfc45efe4c7bf896c3fbfdeaf49947a1851ec5050b57f5349b33] <==
	I0806 00:20:09.070705       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0806 00:20:09.073585       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0806 00:20:38.091530   66570 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19373-9606/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
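Note on the "bufio.Scanner: token too long" error in the stderr block above: Go's bufio.Scanner refuses any single line longer than its token limit (64 KiB by default, bufio.MaxScanTokenSize), which is why reading lastStart.txt failed here. The sketch below is illustrative only, not minikube's actual logs code; the file path is assumed for the example. It shows how a scanner can be given a larger buffer so oversized lines no longer trip bufio.ErrTooLong.

// Minimal sketch (hypothetical, not minikube's logs.go): read a log file
// line-by-line while raising bufio.Scanner's default 64 KiB token limit,
// the limit that produces "bufio.Scanner: token too long" above.
package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	f, err := os.Open("lastStart.txt") // assumed path, for illustration only
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Allow single lines up to 1 MiB instead of the 64 KiB default
	// (bufio.MaxScanTokenSize); otherwise an oversized line makes
	// sc.Err() return bufio.ErrTooLong.
	sc.Buffer(make([]byte, 64*1024), 1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan error:", err)
	}
}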
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-907863 -n kubernetes-upgrade-907863
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-907863 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-907863" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-907863
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-907863: (1.112842421s)
--- FAIL: TestKubernetesUpgrade (389.35s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (56.57s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-161508 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-161508 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (52.327048053s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-161508] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19373
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19373-9606/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19373-9606/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-161508" primary control-plane node in "pause-161508" cluster
	* Updating the running kvm2 "pause-161508" VM ...
	* Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-161508" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0806 00:14:41.788859   62044 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:14:41.789067   62044 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:14:41.789076   62044 out.go:304] Setting ErrFile to fd 2...
	I0806 00:14:41.789082   62044 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:14:41.789405   62044 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-9606/.minikube/bin
	I0806 00:14:41.790212   62044 out.go:298] Setting JSON to false
	I0806 00:14:41.791571   62044 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7028,"bootTime":1722896254,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0806 00:14:41.791653   62044 start.go:139] virtualization: kvm guest
	I0806 00:14:41.794323   62044 out.go:177] * [pause-161508] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0806 00:14:41.796007   62044 notify.go:220] Checking for updates...
	I0806 00:14:41.796223   62044 out.go:177]   - MINIKUBE_LOCATION=19373
	I0806 00:14:41.797754   62044 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 00:14:41.799334   62044 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19373-9606/kubeconfig
	I0806 00:14:41.800925   62044 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19373-9606/.minikube
	I0806 00:14:41.802432   62044 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0806 00:14:41.803962   62044 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 00:14:41.806112   62044 config.go:182] Loaded profile config "pause-161508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 00:14:41.806748   62044 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 00:14:41.806823   62044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 00:14:41.823009   62044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34723
	I0806 00:14:41.823404   62044 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:14:41.823984   62044 main.go:141] libmachine: Using API Version  1
	I0806 00:14:41.824004   62044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:14:41.824347   62044 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:14:41.824557   62044 main.go:141] libmachine: (pause-161508) Calling .DriverName
	I0806 00:14:41.824814   62044 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 00:14:41.825101   62044 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 00:14:41.825150   62044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 00:14:41.840315   62044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33889
	I0806 00:14:41.840724   62044 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:14:41.841285   62044 main.go:141] libmachine: Using API Version  1
	I0806 00:14:41.841322   62044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:14:41.841604   62044 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:14:41.841830   62044 main.go:141] libmachine: (pause-161508) Calling .DriverName
	I0806 00:14:41.884052   62044 out.go:177] * Using the kvm2 driver based on existing profile
	I0806 00:14:41.885496   62044 start.go:297] selected driver: kvm2
	I0806 00:14:41.885512   62044 start.go:901] validating driver "kvm2" against &{Name:pause-161508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.30.3 ClusterName:pause-161508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.118 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-dev
ice-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:14:41.885667   62044 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 00:14:41.886046   62044 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:14:41.886128   62044 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19373-9606/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0806 00:14:41.902326   62044 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0806 00:14:41.903303   62044 cni.go:84] Creating CNI manager for ""
	I0806 00:14:41.903321   62044 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 00:14:41.903384   62044 start.go:340] cluster config:
	{Name:pause-161508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-161508 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.118 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:
false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:14:41.903517   62044 iso.go:125] acquiring lock: {Name:mk54a637ed625e04bb2b6adf973b61c976cd6d35 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:14:41.905498   62044 out.go:177] * Starting "pause-161508" primary control-plane node in "pause-161508" cluster
	I0806 00:14:41.907009   62044 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0806 00:14:41.907079   62044 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0806 00:14:41.907114   62044 cache.go:56] Caching tarball of preloaded images
	I0806 00:14:41.907239   62044 preload.go:172] Found /home/jenkins/minikube-integration/19373-9606/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0806 00:14:41.907254   62044 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0806 00:14:41.907414   62044 profile.go:143] Saving config to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/pause-161508/config.json ...
	I0806 00:14:41.907676   62044 start.go:360] acquireMachinesLock for pause-161508: {Name:mkd2ba511c39504598222edbf83078b718329186 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:14:46.343989   62044 start.go:364] duration metric: took 4.436280144s to acquireMachinesLock for "pause-161508"
	I0806 00:14:46.344061   62044 start.go:96] Skipping create...Using existing machine configuration
	I0806 00:14:46.344071   62044 fix.go:54] fixHost starting: 
	I0806 00:14:46.344446   62044 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 00:14:46.344487   62044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0806 00:14:46.362745   62044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40551
	I0806 00:14:46.363336   62044 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:14:46.363829   62044 main.go:141] libmachine: Using API Version  1
	I0806 00:14:46.363852   62044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:14:46.364173   62044 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:14:46.364357   62044 main.go:141] libmachine: (pause-161508) Calling .DriverName
	I0806 00:14:46.364503   62044 main.go:141] libmachine: (pause-161508) Calling .GetState
	I0806 00:14:46.366068   62044 fix.go:112] recreateIfNeeded on pause-161508: state=Running err=<nil>
	W0806 00:14:46.366086   62044 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 00:14:46.368433   62044 out.go:177] * Updating the running kvm2 "pause-161508" VM ...
	I0806 00:14:46.369648   62044 machine.go:94] provisionDockerMachine start ...
	I0806 00:14:46.369671   62044 main.go:141] libmachine: (pause-161508) Calling .DriverName
	I0806 00:14:46.369843   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHHostname
	I0806 00:14:46.373341   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:46.373936   62044 main.go:141] libmachine: (pause-161508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:8e:29", ip: ""} in network mk-pause-161508: {Iface:virbr3 ExpiryTime:2024-08-06 01:13:19 +0000 UTC Type:0 Mac:52:54:00:33:8e:29 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:pause-161508 Clientid:01:52:54:00:33:8e:29}
	I0806 00:14:46.373958   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined IP address 192.168.39.118 and MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:46.374181   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHPort
	I0806 00:14:46.374359   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHKeyPath
	I0806 00:14:46.374515   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHKeyPath
	I0806 00:14:46.374633   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHUsername
	I0806 00:14:46.374836   62044 main.go:141] libmachine: Using SSH client type: native
	I0806 00:14:46.375099   62044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0806 00:14:46.375113   62044 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 00:14:46.484502   62044 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-161508
	
	I0806 00:14:46.484534   62044 main.go:141] libmachine: (pause-161508) Calling .GetMachineName
	I0806 00:14:46.484794   62044 buildroot.go:166] provisioning hostname "pause-161508"
	I0806 00:14:46.484826   62044 main.go:141] libmachine: (pause-161508) Calling .GetMachineName
	I0806 00:14:46.485005   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHHostname
	I0806 00:14:46.488282   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:46.488626   62044 main.go:141] libmachine: (pause-161508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:8e:29", ip: ""} in network mk-pause-161508: {Iface:virbr3 ExpiryTime:2024-08-06 01:13:19 +0000 UTC Type:0 Mac:52:54:00:33:8e:29 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:pause-161508 Clientid:01:52:54:00:33:8e:29}
	I0806 00:14:46.488661   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined IP address 192.168.39.118 and MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:46.488775   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHPort
	I0806 00:14:46.488956   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHKeyPath
	I0806 00:14:46.489138   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHKeyPath
	I0806 00:14:46.489280   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHUsername
	I0806 00:14:46.489470   62044 main.go:141] libmachine: Using SSH client type: native
	I0806 00:14:46.489668   62044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0806 00:14:46.489687   62044 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-161508 && echo "pause-161508" | sudo tee /etc/hostname
	I0806 00:14:46.627230   62044 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-161508
	
	I0806 00:14:46.627262   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHHostname
	I0806 00:14:46.630624   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:46.631014   62044 main.go:141] libmachine: (pause-161508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:8e:29", ip: ""} in network mk-pause-161508: {Iface:virbr3 ExpiryTime:2024-08-06 01:13:19 +0000 UTC Type:0 Mac:52:54:00:33:8e:29 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:pause-161508 Clientid:01:52:54:00:33:8e:29}
	I0806 00:14:46.631044   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined IP address 192.168.39.118 and MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:46.631467   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHPort
	I0806 00:14:46.631646   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHKeyPath
	I0806 00:14:46.631821   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHKeyPath
	I0806 00:14:46.632028   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHUsername
	I0806 00:14:46.632193   62044 main.go:141] libmachine: Using SSH client type: native
	I0806 00:14:46.632425   62044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0806 00:14:46.632449   62044 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-161508' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-161508/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-161508' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 00:14:46.752560   62044 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:14:46.752602   62044 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19373-9606/.minikube CaCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19373-9606/.minikube}
	I0806 00:14:46.752646   62044 buildroot.go:174] setting up certificates
	I0806 00:14:46.752658   62044 provision.go:84] configureAuth start
	I0806 00:14:46.752672   62044 main.go:141] libmachine: (pause-161508) Calling .GetMachineName
	I0806 00:14:46.752976   62044 main.go:141] libmachine: (pause-161508) Calling .GetIP
	I0806 00:14:46.755947   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:46.756352   62044 main.go:141] libmachine: (pause-161508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:8e:29", ip: ""} in network mk-pause-161508: {Iface:virbr3 ExpiryTime:2024-08-06 01:13:19 +0000 UTC Type:0 Mac:52:54:00:33:8e:29 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:pause-161508 Clientid:01:52:54:00:33:8e:29}
	I0806 00:14:46.756377   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined IP address 192.168.39.118 and MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:46.756591   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHHostname
	I0806 00:14:46.759702   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:46.760112   62044 main.go:141] libmachine: (pause-161508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:8e:29", ip: ""} in network mk-pause-161508: {Iface:virbr3 ExpiryTime:2024-08-06 01:13:19 +0000 UTC Type:0 Mac:52:54:00:33:8e:29 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:pause-161508 Clientid:01:52:54:00:33:8e:29}
	I0806 00:14:46.760141   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined IP address 192.168.39.118 and MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:46.760359   62044 provision.go:143] copyHostCerts
	I0806 00:14:46.760426   62044 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem, removing ...
	I0806 00:14:46.760437   62044 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem
	I0806 00:14:46.760495   62044 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem (1082 bytes)
	I0806 00:14:46.760592   62044 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem, removing ...
	I0806 00:14:46.760601   62044 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem
	I0806 00:14:46.760625   62044 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem (1123 bytes)
	I0806 00:14:46.760711   62044 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem, removing ...
	I0806 00:14:46.760720   62044 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem
	I0806 00:14:46.760739   62044 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem (1679 bytes)
	I0806 00:14:46.760810   62044 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem org=jenkins.pause-161508 san=[127.0.0.1 192.168.39.118 localhost minikube pause-161508]
	I0806 00:14:46.982836   62044 provision.go:177] copyRemoteCerts
	I0806 00:14:46.982898   62044 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 00:14:46.982922   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHHostname
	I0806 00:14:46.985958   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:46.986362   62044 main.go:141] libmachine: (pause-161508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:8e:29", ip: ""} in network mk-pause-161508: {Iface:virbr3 ExpiryTime:2024-08-06 01:13:19 +0000 UTC Type:0 Mac:52:54:00:33:8e:29 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:pause-161508 Clientid:01:52:54:00:33:8e:29}
	I0806 00:14:46.986394   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined IP address 192.168.39.118 and MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:46.986557   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHPort
	I0806 00:14:46.986805   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHKeyPath
	I0806 00:14:46.986991   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHUsername
	I0806 00:14:46.987143   62044 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/pause-161508/id_rsa Username:docker}
	I0806 00:14:47.070447   62044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0806 00:14:47.109142   62044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0806 00:14:47.137301   62044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 00:14:47.164786   62044 provision.go:87] duration metric: took 412.109539ms to configureAuth
	I0806 00:14:47.164817   62044 buildroot.go:189] setting minikube options for container-runtime
	I0806 00:14:47.165068   62044 config.go:182] Loaded profile config "pause-161508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 00:14:47.165146   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHHostname
	I0806 00:14:47.168216   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:47.168569   62044 main.go:141] libmachine: (pause-161508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:8e:29", ip: ""} in network mk-pause-161508: {Iface:virbr3 ExpiryTime:2024-08-06 01:13:19 +0000 UTC Type:0 Mac:52:54:00:33:8e:29 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:pause-161508 Clientid:01:52:54:00:33:8e:29}
	I0806 00:14:47.168631   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined IP address 192.168.39.118 and MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:47.168795   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHPort
	I0806 00:14:47.169007   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHKeyPath
	I0806 00:14:47.169210   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHKeyPath
	I0806 00:14:47.169368   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHUsername
	I0806 00:14:47.169555   62044 main.go:141] libmachine: Using SSH client type: native
	I0806 00:14:47.169746   62044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0806 00:14:47.169767   62044 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 00:14:52.768207   62044 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 00:14:52.768237   62044 machine.go:97] duration metric: took 6.398573772s to provisionDockerMachine
	I0806 00:14:52.768252   62044 start.go:293] postStartSetup for "pause-161508" (driver="kvm2")
	I0806 00:14:52.768266   62044 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 00:14:52.768286   62044 main.go:141] libmachine: (pause-161508) Calling .DriverName
	I0806 00:14:52.768771   62044 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 00:14:52.768800   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHHostname
	I0806 00:14:52.772553   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:52.773026   62044 main.go:141] libmachine: (pause-161508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:8e:29", ip: ""} in network mk-pause-161508: {Iface:virbr3 ExpiryTime:2024-08-06 01:13:19 +0000 UTC Type:0 Mac:52:54:00:33:8e:29 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:pause-161508 Clientid:01:52:54:00:33:8e:29}
	I0806 00:14:52.773057   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined IP address 192.168.39.118 and MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:52.773385   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHPort
	I0806 00:14:52.773599   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHKeyPath
	I0806 00:14:52.773756   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHUsername
	I0806 00:14:52.774022   62044 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/pause-161508/id_rsa Username:docker}
	I0806 00:14:52.858402   62044 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 00:14:52.864471   62044 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 00:14:52.864505   62044 filesync.go:126] Scanning /home/jenkins/minikube-integration/19373-9606/.minikube/addons for local assets ...
	I0806 00:14:52.864570   62044 filesync.go:126] Scanning /home/jenkins/minikube-integration/19373-9606/.minikube/files for local assets ...
	I0806 00:14:52.864674   62044 filesync.go:149] local asset: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem -> 167922.pem in /etc/ssl/certs
	I0806 00:14:52.864774   62044 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 00:14:52.875107   62044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem --> /etc/ssl/certs/167922.pem (1708 bytes)
	I0806 00:14:52.902994   62044 start.go:296] duration metric: took 134.72929ms for postStartSetup
	I0806 00:14:52.903034   62044 fix.go:56] duration metric: took 6.558964017s for fixHost
	I0806 00:14:52.903069   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHHostname
	I0806 00:14:52.905787   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:52.906163   62044 main.go:141] libmachine: (pause-161508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:8e:29", ip: ""} in network mk-pause-161508: {Iface:virbr3 ExpiryTime:2024-08-06 01:13:19 +0000 UTC Type:0 Mac:52:54:00:33:8e:29 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:pause-161508 Clientid:01:52:54:00:33:8e:29}
	I0806 00:14:52.906193   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined IP address 192.168.39.118 and MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:52.906354   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHPort
	I0806 00:14:52.906552   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHKeyPath
	I0806 00:14:52.906724   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHKeyPath
	I0806 00:14:52.906870   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHUsername
	I0806 00:14:52.907046   62044 main.go:141] libmachine: Using SSH client type: native
	I0806 00:14:52.907265   62044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0806 00:14:52.907276   62044 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0806 00:14:53.013802   62044 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722903293.008614911
	
	I0806 00:14:53.013833   62044 fix.go:216] guest clock: 1722903293.008614911
	I0806 00:14:53.013843   62044 fix.go:229] Guest: 2024-08-06 00:14:53.008614911 +0000 UTC Remote: 2024-08-06 00:14:52.903038034 +0000 UTC m=+11.159767359 (delta=105.576877ms)
	I0806 00:14:53.013868   62044 fix.go:200] guest clock delta is within tolerance: 105.576877ms
	I0806 00:14:53.013875   62044 start.go:83] releasing machines lock for "pause-161508", held for 6.669834897s
	I0806 00:14:53.013902   62044 main.go:141] libmachine: (pause-161508) Calling .DriverName
	I0806 00:14:53.014197   62044 main.go:141] libmachine: (pause-161508) Calling .GetIP
	I0806 00:14:53.017386   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:53.017783   62044 main.go:141] libmachine: (pause-161508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:8e:29", ip: ""} in network mk-pause-161508: {Iface:virbr3 ExpiryTime:2024-08-06 01:13:19 +0000 UTC Type:0 Mac:52:54:00:33:8e:29 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:pause-161508 Clientid:01:52:54:00:33:8e:29}
	I0806 00:14:53.017818   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined IP address 192.168.39.118 and MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:53.017973   62044 main.go:141] libmachine: (pause-161508) Calling .DriverName
	I0806 00:14:53.018597   62044 main.go:141] libmachine: (pause-161508) Calling .DriverName
	I0806 00:14:53.018807   62044 main.go:141] libmachine: (pause-161508) Calling .DriverName
	I0806 00:14:53.018919   62044 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 00:14:53.018957   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHHostname
	I0806 00:14:53.018977   62044 ssh_runner.go:195] Run: cat /version.json
	I0806 00:14:53.019001   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHHostname
	I0806 00:14:53.021792   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:53.022186   62044 main.go:141] libmachine: (pause-161508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:8e:29", ip: ""} in network mk-pause-161508: {Iface:virbr3 ExpiryTime:2024-08-06 01:13:19 +0000 UTC Type:0 Mac:52:54:00:33:8e:29 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:pause-161508 Clientid:01:52:54:00:33:8e:29}
	I0806 00:14:53.022211   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined IP address 192.168.39.118 and MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:53.022232   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:53.022456   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHPort
	I0806 00:14:53.022705   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHKeyPath
	I0806 00:14:53.022735   62044 main.go:141] libmachine: (pause-161508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:8e:29", ip: ""} in network mk-pause-161508: {Iface:virbr3 ExpiryTime:2024-08-06 01:13:19 +0000 UTC Type:0 Mac:52:54:00:33:8e:29 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:pause-161508 Clientid:01:52:54:00:33:8e:29}
	I0806 00:14:53.022761   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined IP address 192.168.39.118 and MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:53.022922   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHUsername
	I0806 00:14:53.022980   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHPort
	I0806 00:14:53.023161   62044 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/pause-161508/id_rsa Username:docker}
	I0806 00:14:53.023305   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHKeyPath
	I0806 00:14:53.023458   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHUsername
	I0806 00:14:53.023609   62044 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/pause-161508/id_rsa Username:docker}
	I0806 00:14:53.100662   62044 ssh_runner.go:195] Run: systemctl --version
	I0806 00:14:53.127867   62044 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 00:14:53.286775   62044 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 00:14:53.298035   62044 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 00:14:53.298114   62044 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 00:14:53.310993   62044 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0806 00:14:53.311022   62044 start.go:495] detecting cgroup driver to use...
	I0806 00:14:53.311132   62044 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 00:14:53.334023   62044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:14:53.356468   62044 docker.go:217] disabling cri-docker service (if available) ...
	I0806 00:14:53.356540   62044 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 00:14:53.376425   62044 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 00:14:53.395643   62044 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 00:14:53.562361   62044 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 00:14:53.750649   62044 docker.go:233] disabling docker service ...
	I0806 00:14:53.750739   62044 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 00:14:53.770975   62044 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 00:14:53.787945   62044 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 00:14:53.968997   62044 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 00:14:54.130928   62044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 00:14:54.149110   62044 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:14:54.171520   62044 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0806 00:14:54.171594   62044 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 00:14:54.184680   62044 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 00:14:54.184743   62044 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 00:14:54.197904   62044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 00:14:54.210910   62044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 00:14:54.225671   62044 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 00:14:54.238316   62044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 00:14:54.251342   62044 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 00:14:54.263693   62044 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 00:14:54.275743   62044 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 00:14:54.286930   62044 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 00:14:54.298370   62044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:14:54.444776   62044 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0806 00:14:55.565977   62044 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.121158698s)
	I0806 00:14:55.566012   62044 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 00:14:55.566063   62044 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 00:14:55.572805   62044 start.go:563] Will wait 60s for crictl version
	I0806 00:14:55.572876   62044 ssh_runner.go:195] Run: which crictl
	I0806 00:14:55.578202   62044 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 00:14:55.634895   62044 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 00:14:55.634995   62044 ssh_runner.go:195] Run: crio --version
	I0806 00:14:55.675524   62044 ssh_runner.go:195] Run: crio --version
	I0806 00:14:55.718988   62044 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0806 00:14:55.720363   62044 main.go:141] libmachine: (pause-161508) Calling .GetIP
	I0806 00:14:55.723606   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:55.723915   62044 main.go:141] libmachine: (pause-161508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:8e:29", ip: ""} in network mk-pause-161508: {Iface:virbr3 ExpiryTime:2024-08-06 01:13:19 +0000 UTC Type:0 Mac:52:54:00:33:8e:29 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:pause-161508 Clientid:01:52:54:00:33:8e:29}
	I0806 00:14:55.723945   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined IP address 192.168.39.118 and MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:55.724210   62044 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0806 00:14:55.730905   62044 kubeadm.go:883] updating cluster {Name:pause-161508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3
ClusterName:pause-161508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.118 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:fals
e olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 00:14:55.731109   62044 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0806 00:14:55.731169   62044 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 00:14:55.795823   62044 crio.go:514] all images are preloaded for cri-o runtime.
	I0806 00:14:55.795857   62044 crio.go:433] Images already preloaded, skipping extraction
	I0806 00:14:55.795919   62044 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 00:14:55.836123   62044 crio.go:514] all images are preloaded for cri-o runtime.
	I0806 00:14:55.836147   62044 cache_images.go:84] Images are preloaded, skipping loading
	I0806 00:14:55.836157   62044 kubeadm.go:934] updating node { 192.168.39.118 8443 v1.30.3 crio true true} ...
	I0806 00:14:55.836287   62044 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-161508 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.118
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:pause-161508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 00:14:55.836381   62044 ssh_runner.go:195] Run: crio config
	I0806 00:14:55.889286   62044 cni.go:84] Creating CNI manager for ""
	I0806 00:14:55.889312   62044 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 00:14:55.889323   62044 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 00:14:55.889351   62044 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.118 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-161508 NodeName:pause-161508 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.118"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.118 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 00:14:55.889555   62044 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.118
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-161508"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.118
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.118"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 00:14:55.889623   62044 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0806 00:14:55.932046   62044 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 00:14:55.932135   62044 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 00:14:55.950419   62044 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0806 00:14:56.036763   62044 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 00:14:56.159694   62044 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0806 00:14:56.254787   62044 ssh_runner.go:195] Run: grep 192.168.39.118	control-plane.minikube.internal$ /etc/hosts
	I0806 00:14:56.288571   62044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:14:56.575805   62044 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 00:14:56.754768   62044 certs.go:68] Setting up /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/pause-161508 for IP: 192.168.39.118
	I0806 00:14:56.754791   62044 certs.go:194] generating shared ca certs ...
	I0806 00:14:56.754810   62044 certs.go:226] acquiring lock for ca certs: {Name:mkf35a042c1656d191f542eee7fa087aad4d29d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:14:56.755074   62044 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key
	I0806 00:14:56.755141   62044 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key
	I0806 00:14:56.755154   62044 certs.go:256] generating profile certs ...
	I0806 00:14:56.755260   62044 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/pause-161508/client.key
	I0806 00:14:56.755339   62044 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/pause-161508/apiserver.key.423b175f
	I0806 00:14:56.755386   62044 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/pause-161508/proxy-client.key
	I0806 00:14:56.755522   62044 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792.pem (1338 bytes)
	W0806 00:14:56.755559   62044 certs.go:480] ignoring /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792_empty.pem, impossibly tiny 0 bytes
	I0806 00:14:56.755570   62044 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem (1679 bytes)
	I0806 00:14:56.755607   62044 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem (1082 bytes)
	I0806 00:14:56.755656   62044 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem (1123 bytes)
	I0806 00:14:56.755693   62044 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem (1679 bytes)
	I0806 00:14:56.755748   62044 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem (1708 bytes)
	I0806 00:14:56.756618   62044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 00:14:56.879774   62044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 00:14:56.952696   62044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 00:14:57.016032   62044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0806 00:14:57.114988   62044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/pause-161508/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0806 00:14:57.176152   62044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/pause-161508/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0806 00:14:57.210978   62044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/pause-161508/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 00:14:57.252853   62044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/pause-161508/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 00:14:57.316820   62044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792.pem --> /usr/share/ca-certificates/16792.pem (1338 bytes)
	I0806 00:14:57.363002   62044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem --> /usr/share/ca-certificates/167922.pem (1708 bytes)
	I0806 00:14:57.399698   62044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 00:14:57.432814   62044 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 00:14:57.453941   62044 ssh_runner.go:195] Run: openssl version
	I0806 00:14:57.464156   62044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16792.pem && ln -fs /usr/share/ca-certificates/16792.pem /etc/ssl/certs/16792.pem"
	I0806 00:14:57.489040   62044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16792.pem
	I0806 00:14:57.494783   62044 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 23:03 /usr/share/ca-certificates/16792.pem
	I0806 00:14:57.494877   62044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16792.pem
	I0806 00:14:57.504614   62044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16792.pem /etc/ssl/certs/51391683.0"
	I0806 00:14:57.517885   62044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167922.pem && ln -fs /usr/share/ca-certificates/167922.pem /etc/ssl/certs/167922.pem"
	I0806 00:14:57.532455   62044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167922.pem
	I0806 00:14:57.538611   62044 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 23:03 /usr/share/ca-certificates/167922.pem
	I0806 00:14:57.538681   62044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167922.pem
	I0806 00:14:57.548094   62044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167922.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 00:14:57.563012   62044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 00:14:57.580706   62044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:14:57.587499   62044 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:14:57.587569   62044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:14:57.600755   62044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 00:14:57.617274   62044 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 00:14:57.625073   62044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0806 00:14:57.633761   62044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0806 00:14:57.642962   62044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0806 00:14:57.651893   62044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0806 00:14:57.660085   62044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0806 00:14:57.675488   62044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0806 00:14:57.683789   62044 kubeadm.go:392] StartCluster: {Name:pause-161508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:pause-161508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.118 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false o
lm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:14:57.683936   62044 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 00:14:57.684025   62044 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 00:14:57.779721   62044 cri.go:89] found id: "7d8cf53ea71f671cd11c77d76585125000808e1e5e9dbdf057515fae3694c8c2"
	I0806 00:14:57.779749   62044 cri.go:89] found id: "6c3e3869967dcdea9538e99cfba9fa7cbeab8604b70330171ff36214ad65dc4f"
	I0806 00:14:57.779757   62044 cri.go:89] found id: "b5f13fe4c6e99948bd3db06aa7e20e2aa8073f836fe73e27f62926299efa70db"
	I0806 00:14:57.779765   62044 cri.go:89] found id: "1bf2df2d254dca2dd27d3eae24da873f45a9ff1fbdfc0ea1dd1a35201bcd069a"
	I0806 00:14:57.779771   62044 cri.go:89] found id: "e7bde654f01ecd95054cba7e1831b15349cfc28b44f4f1a6722bec18d022099a"
	I0806 00:14:57.779776   62044 cri.go:89] found id: "6471bcdcb4ee5e45f9f8c1500088cb267ab957b707b6c9091e097c704b2d66d6"
	I0806 00:14:57.779780   62044 cri.go:89] found id: "bfaba2e9c5b00ff3bf65111355285eff0b912f5fc7bfb869f50fb2fffad3292c"
	I0806 00:14:57.779785   62044 cri.go:89] found id: "97903d796b6207952efa4d432caf2c3e60811379a89eae5fb77e2fa8c1a1d028"
	I0806 00:14:57.779790   62044 cri.go:89] found id: "895560f466b423fe1dfc2c8b3564008271d04a68b72ddc661ae492d8d6fe1900"
	I0806 00:14:57.779799   62044 cri.go:89] found id: "675d1cd5f51ab58fac223676eede1d4e46868c8e294ae5a521cd08300f62038b"
	I0806 00:14:57.779804   62044 cri.go:89] found id: ""
	I0806 00:14:57.779859   62044 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-161508 -n pause-161508
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-161508 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-161508 logs -n 25: (1.451811867s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-env-571298           | force-systemd-env-571298  | jenkins | v1.33.1 | 06 Aug 24 00:10 UTC | 06 Aug 24 00:11 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-849515                | NoKubernetes-849515       | jenkins | v1.33.1 | 06 Aug 24 00:11 UTC | 06 Aug 24 00:11 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-571298           | force-systemd-env-571298  | jenkins | v1.33.1 | 06 Aug 24 00:11 UTC | 06 Aug 24 00:11 UTC |
	| delete  | -p offline-crio-820703                | offline-crio-820703       | jenkins | v1.33.1 | 06 Aug 24 00:11 UTC | 06 Aug 24 00:11 UTC |
	| start   | -p force-systemd-flag-936727          | force-systemd-flag-936727 | jenkins | v1.33.1 | 06 Aug 24 00:11 UTC | 06 Aug 24 00:12 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p cert-expiration-272169             | cert-expiration-272169    | jenkins | v1.33.1 | 06 Aug 24 00:11 UTC | 06 Aug 24 00:12 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-849515                | NoKubernetes-849515       | jenkins | v1.33.1 | 06 Aug 24 00:11 UTC | 06 Aug 24 00:11 UTC |
	| start   | -p NoKubernetes-849515                | NoKubernetes-849515       | jenkins | v1.33.1 | 06 Aug 24 00:11 UTC | 06 Aug 24 00:13 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p running-upgrade-863913             | running-upgrade-863913    | jenkins | v1.33.1 | 06 Aug 24 00:12 UTC | 06 Aug 24 00:14 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-936727 ssh cat     | force-systemd-flag-936727 | jenkins | v1.33.1 | 06 Aug 24 00:12 UTC | 06 Aug 24 00:12 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-936727          | force-systemd-flag-936727 | jenkins | v1.33.1 | 06 Aug 24 00:12 UTC | 06 Aug 24 00:12 UTC |
	| start   | -p pause-161508 --memory=2048         | pause-161508              | jenkins | v1.33.1 | 06 Aug 24 00:12 UTC | 06 Aug 24 00:14 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-849515 sudo           | NoKubernetes-849515       | jenkins | v1.33.1 | 06 Aug 24 00:13 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-849515                | NoKubernetes-849515       | jenkins | v1.33.1 | 06 Aug 24 00:13 UTC | 06 Aug 24 00:13 UTC |
	| start   | -p NoKubernetes-849515                | NoKubernetes-849515       | jenkins | v1.33.1 | 06 Aug 24 00:13 UTC | 06 Aug 24 00:13 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-849515 sudo           | NoKubernetes-849515       | jenkins | v1.33.1 | 06 Aug 24 00:13 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-849515                | NoKubernetes-849515       | jenkins | v1.33.1 | 06 Aug 24 00:13 UTC | 06 Aug 24 00:13 UTC |
	| start   | -p cert-options-323157                | cert-options-323157       | jenkins | v1.33.1 | 06 Aug 24 00:13 UTC | 06 Aug 24 00:14 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-863913             | running-upgrade-863913    | jenkins | v1.33.1 | 06 Aug 24 00:14 UTC | 06 Aug 24 00:14 UTC |
	| start   | -p kubernetes-upgrade-907863          | kubernetes-upgrade-907863 | jenkins | v1.33.1 | 06 Aug 24 00:14 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-161508                       | pause-161508              | jenkins | v1.33.1 | 06 Aug 24 00:14 UTC | 06 Aug 24 00:15 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-323157 ssh               | cert-options-323157       | jenkins | v1.33.1 | 06 Aug 24 00:14 UTC | 06 Aug 24 00:14 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-323157 -- sudo        | cert-options-323157       | jenkins | v1.33.1 | 06 Aug 24 00:14 UTC | 06 Aug 24 00:14 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-323157                | cert-options-323157       | jenkins | v1.33.1 | 06 Aug 24 00:14 UTC | 06 Aug 24 00:14 UTC |
	| start   | -p stopped-upgrade-936666             | minikube                  | jenkins | v1.26.0 | 06 Aug 24 00:14 UTC |                     |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 00:14:46
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.18.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 00:14:46.662867   62278 out.go:296] Setting OutFile to fd 1 ...
	I0806 00:14:46.663143   62278 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0806 00:14:46.663148   62278 out.go:309] Setting ErrFile to fd 2...
	I0806 00:14:46.663151   62278 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0806 00:14:46.663812   62278 root.go:329] Updating PATH: /home/jenkins/minikube-integration/19373-9606/.minikube/bin
	I0806 00:14:46.664069   62278 out.go:303] Setting JSON to false
	I0806 00:14:46.664998   62278 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7033,"bootTime":1722896254,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0806 00:14:46.665064   62278 start.go:125] virtualization: kvm guest
	I0806 00:14:46.667597   62278 out.go:177] * [stopped-upgrade-936666] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0806 00:14:46.669191   62278 out.go:177]   - MINIKUBE_LOCATION=19373
	I0806 00:14:46.669206   62278 notify.go:193] Checking for updates...
	I0806 00:14:46.670831   62278 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 00:14:46.672530   62278 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19373-9606/.minikube
	I0806 00:14:46.674266   62278 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0806 00:14:46.675792   62278 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 00:14:46.677080   62278 out.go:177]   - KUBECONFIG=/tmp/legacy_kubeconfig3928839946
	I0806 00:14:46.678954   62278 config.go:178] Loaded profile config "cert-expiration-272169": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 00:14:46.679117   62278 config.go:178] Loaded profile config "kubernetes-upgrade-907863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0806 00:14:46.679337   62278 config.go:178] Loaded profile config "pause-161508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 00:14:46.679434   62278 driver.go:360] Setting default libvirt URI to qemu:///system
	I0806 00:14:46.721814   62278 out.go:177] * Using the kvm2 driver based on user configuration
	I0806 00:14:46.723356   62278 start.go:284] selected driver: kvm2
	I0806 00:14:46.723371   62278 start.go:805] validating driver "kvm2" against <nil>
	I0806 00:14:46.723399   62278 start.go:816] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 00:14:46.724368   62278 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:14:46.724580   62278 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19373-9606/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0806 00:14:46.741139   62278 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0806 00:14:46.741248   62278 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0806 00:14:46.741476   62278 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
	I0806 00:14:46.741498   62278 cni.go:95] Creating CNI manager for ""
	I0806 00:14:46.741509   62278 cni.go:165] "kvm2" driver + crio runtime found, recommending bridge
	I0806 00:14:46.741516   62278 start_flags.go:305] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0806 00:14:46.741524   62278 start_flags.go:310] config:
	{Name:stopped-upgrade-936666 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-936666 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket:
NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0806 00:14:46.741640   62278 iso.go:128] acquiring lock: {Name:mk3d6c03f606a5ab492378ade22ea2c351c6325a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:14:46.744089   62278 out.go:177] * Starting control plane node stopped-upgrade-936666 in cluster stopped-upgrade-936666
	I0806 00:14:46.369648   62044 machine.go:94] provisionDockerMachine start ...
	I0806 00:14:46.369671   62044 main.go:141] libmachine: (pause-161508) Calling .DriverName
	I0806 00:14:46.369843   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHHostname
	I0806 00:14:46.373341   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:46.373936   62044 main.go:141] libmachine: (pause-161508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:8e:29", ip: ""} in network mk-pause-161508: {Iface:virbr3 ExpiryTime:2024-08-06 01:13:19 +0000 UTC Type:0 Mac:52:54:00:33:8e:29 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:pause-161508 Clientid:01:52:54:00:33:8e:29}
	I0806 00:14:46.373958   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined IP address 192.168.39.118 and MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:46.374181   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHPort
	I0806 00:14:46.374359   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHKeyPath
	I0806 00:14:46.374515   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHKeyPath
	I0806 00:14:46.374633   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHUsername
	I0806 00:14:46.374836   62044 main.go:141] libmachine: Using SSH client type: native
	I0806 00:14:46.375099   62044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0806 00:14:46.375113   62044 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 00:14:46.484502   62044 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-161508
	
	I0806 00:14:46.484534   62044 main.go:141] libmachine: (pause-161508) Calling .GetMachineName
	I0806 00:14:46.484794   62044 buildroot.go:166] provisioning hostname "pause-161508"
	I0806 00:14:46.484826   62044 main.go:141] libmachine: (pause-161508) Calling .GetMachineName
	I0806 00:14:46.485005   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHHostname
	I0806 00:14:46.488282   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:46.488626   62044 main.go:141] libmachine: (pause-161508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:8e:29", ip: ""} in network mk-pause-161508: {Iface:virbr3 ExpiryTime:2024-08-06 01:13:19 +0000 UTC Type:0 Mac:52:54:00:33:8e:29 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:pause-161508 Clientid:01:52:54:00:33:8e:29}
	I0806 00:14:46.488661   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined IP address 192.168.39.118 and MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:46.488775   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHPort
	I0806 00:14:46.488956   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHKeyPath
	I0806 00:14:46.489138   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHKeyPath
	I0806 00:14:46.489280   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHUsername
	I0806 00:14:46.489470   62044 main.go:141] libmachine: Using SSH client type: native
	I0806 00:14:46.489668   62044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0806 00:14:46.489687   62044 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-161508 && echo "pause-161508" | sudo tee /etc/hostname
	I0806 00:14:46.627230   62044 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-161508
	
	I0806 00:14:46.627262   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHHostname
	I0806 00:14:46.630624   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:46.631014   62044 main.go:141] libmachine: (pause-161508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:8e:29", ip: ""} in network mk-pause-161508: {Iface:virbr3 ExpiryTime:2024-08-06 01:13:19 +0000 UTC Type:0 Mac:52:54:00:33:8e:29 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:pause-161508 Clientid:01:52:54:00:33:8e:29}
	I0806 00:14:46.631044   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined IP address 192.168.39.118 and MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:46.631467   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHPort
	I0806 00:14:46.631646   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHKeyPath
	I0806 00:14:46.631821   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHKeyPath
	I0806 00:14:46.632028   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHUsername
	I0806 00:14:46.632193   62044 main.go:141] libmachine: Using SSH client type: native
	I0806 00:14:46.632425   62044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0806 00:14:46.632449   62044 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-161508' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-161508/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-161508' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 00:14:46.752560   62044 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:14:46.752602   62044 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19373-9606/.minikube CaCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19373-9606/.minikube}
	I0806 00:14:46.752646   62044 buildroot.go:174] setting up certificates
	I0806 00:14:46.752658   62044 provision.go:84] configureAuth start
	I0806 00:14:46.752672   62044 main.go:141] libmachine: (pause-161508) Calling .GetMachineName
	I0806 00:14:46.752976   62044 main.go:141] libmachine: (pause-161508) Calling .GetIP
	I0806 00:14:46.755947   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:46.756352   62044 main.go:141] libmachine: (pause-161508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:8e:29", ip: ""} in network mk-pause-161508: {Iface:virbr3 ExpiryTime:2024-08-06 01:13:19 +0000 UTC Type:0 Mac:52:54:00:33:8e:29 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:pause-161508 Clientid:01:52:54:00:33:8e:29}
	I0806 00:14:46.756377   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined IP address 192.168.39.118 and MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:46.756591   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHHostname
	I0806 00:14:46.759702   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:46.760112   62044 main.go:141] libmachine: (pause-161508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:8e:29", ip: ""} in network mk-pause-161508: {Iface:virbr3 ExpiryTime:2024-08-06 01:13:19 +0000 UTC Type:0 Mac:52:54:00:33:8e:29 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:pause-161508 Clientid:01:52:54:00:33:8e:29}
	I0806 00:14:46.760141   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined IP address 192.168.39.118 and MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:46.760359   62044 provision.go:143] copyHostCerts
	I0806 00:14:46.760426   62044 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem, removing ...
	I0806 00:14:46.760437   62044 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem
	I0806 00:14:46.760495   62044 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem (1082 bytes)
	I0806 00:14:46.760592   62044 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem, removing ...
	I0806 00:14:46.760601   62044 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem
	I0806 00:14:46.760625   62044 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem (1123 bytes)
	I0806 00:14:46.760711   62044 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem, removing ...
	I0806 00:14:46.760720   62044 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem
	I0806 00:14:46.760739   62044 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem (1679 bytes)
	I0806 00:14:46.760810   62044 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem org=jenkins.pause-161508 san=[127.0.0.1 192.168.39.118 localhost minikube pause-161508]
	I0806 00:14:44.852616   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:44.853078   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Found IP for machine: 192.168.72.112
	I0806 00:14:44.853113   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has current primary IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:44.853123   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Reserving static IP address...
	I0806 00:14:44.853598   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-907863", mac: "52:54:00:f6:6f:99", ip: "192.168.72.112"} in network mk-kubernetes-upgrade-907863
	I0806 00:14:44.937213   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | Getting to WaitForSSH function...
	I0806 00:14:44.937245   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Reserved static IP address: 192.168.72.112
	I0806 00:14:44.937261   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Waiting for SSH to be available...
	I0806 00:14:44.940137   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:44.940632   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:44.940673   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:44.940801   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | Using SSH client type: external
	I0806 00:14:44.940825   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | Using SSH private key: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/kubernetes-upgrade-907863/id_rsa (-rw-------)
	I0806 00:14:44.940862   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.112 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19373-9606/.minikube/machines/kubernetes-upgrade-907863/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0806 00:14:44.940880   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | About to run SSH command:
	I0806 00:14:44.940892   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | exit 0
	I0806 00:14:45.063499   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | SSH cmd err, output: <nil>: 
	I0806 00:14:45.063807   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) KVM machine creation complete!
	I0806 00:14:45.064161   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetConfigRaw
	I0806 00:14:45.064785   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .DriverName
	I0806 00:14:45.064974   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .DriverName
	I0806 00:14:45.065103   61720 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0806 00:14:45.065125   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetState
	I0806 00:14:45.066700   61720 main.go:141] libmachine: Detecting operating system of created instance...
	I0806 00:14:45.066716   61720 main.go:141] libmachine: Waiting for SSH to be available...
	I0806 00:14:45.066724   61720 main.go:141] libmachine: Getting to WaitForSSH function...
	I0806 00:14:45.066732   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHHostname
	I0806 00:14:45.069985   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.070424   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:45.070453   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.070630   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHPort
	I0806 00:14:45.070807   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:45.071003   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:45.071149   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHUsername
	I0806 00:14:45.071287   61720 main.go:141] libmachine: Using SSH client type: native
	I0806 00:14:45.071475   61720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0806 00:14:45.071492   61720 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0806 00:14:45.178702   61720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:14:45.178751   61720 main.go:141] libmachine: Detecting the provisioner...
	I0806 00:14:45.178767   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHHostname
	I0806 00:14:45.182067   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.182470   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:45.182517   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.182630   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHPort
	I0806 00:14:45.182863   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:45.183077   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:45.183250   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHUsername
	I0806 00:14:45.183416   61720 main.go:141] libmachine: Using SSH client type: native
	I0806 00:14:45.183625   61720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0806 00:14:45.183636   61720 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0806 00:14:45.283887   61720 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0806 00:14:45.283956   61720 main.go:141] libmachine: found compatible host: buildroot
	I0806 00:14:45.283966   61720 main.go:141] libmachine: Provisioning with buildroot...
	I0806 00:14:45.283978   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetMachineName
	I0806 00:14:45.284240   61720 buildroot.go:166] provisioning hostname "kubernetes-upgrade-907863"
	I0806 00:14:45.284270   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetMachineName
	I0806 00:14:45.284472   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHHostname
	I0806 00:14:45.287574   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.287912   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:45.287955   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.288147   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHPort
	I0806 00:14:45.288338   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:45.288509   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:45.288713   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHUsername
	I0806 00:14:45.288922   61720 main.go:141] libmachine: Using SSH client type: native
	I0806 00:14:45.289156   61720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0806 00:14:45.289167   61720 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-907863 && echo "kubernetes-upgrade-907863" | sudo tee /etc/hostname
	I0806 00:14:45.413866   61720 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-907863
	
	I0806 00:14:45.413901   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHHostname
	I0806 00:14:45.417554   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.418009   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:45.418043   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.418153   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHPort
	I0806 00:14:45.418331   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:45.418573   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:45.418717   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHUsername
	I0806 00:14:45.418894   61720 main.go:141] libmachine: Using SSH client type: native
	I0806 00:14:45.419083   61720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0806 00:14:45.419103   61720 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-907863' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-907863/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-907863' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 00:14:45.530368   61720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:14:45.530403   61720 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19373-9606/.minikube CaCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19373-9606/.minikube}
	I0806 00:14:45.530459   61720 buildroot.go:174] setting up certificates
	I0806 00:14:45.530478   61720 provision.go:84] configureAuth start
	I0806 00:14:45.530497   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetMachineName
	I0806 00:14:45.530793   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetIP
	I0806 00:14:45.533849   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.534237   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:45.534262   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.534404   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHHostname
	I0806 00:14:45.536544   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.536851   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:45.536890   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.537001   61720 provision.go:143] copyHostCerts
	I0806 00:14:45.537066   61720 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem, removing ...
	I0806 00:14:45.537083   61720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem
	I0806 00:14:45.537142   61720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem (1679 bytes)
	I0806 00:14:45.537260   61720 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem, removing ...
	I0806 00:14:45.537272   61720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem
	I0806 00:14:45.537309   61720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem (1082 bytes)
	I0806 00:14:45.537395   61720 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem, removing ...
	I0806 00:14:45.537405   61720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem
	I0806 00:14:45.537432   61720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem (1123 bytes)
	I0806 00:14:45.537496   61720 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-907863 san=[127.0.0.1 192.168.72.112 kubernetes-upgrade-907863 localhost minikube]
	I0806 00:14:45.648251   61720 provision.go:177] copyRemoteCerts
	I0806 00:14:45.648303   61720 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 00:14:45.648333   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHHostname
	I0806 00:14:45.650992   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.651510   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:45.651534   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.651720   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHPort
	I0806 00:14:45.651912   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:45.652105   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHUsername
	I0806 00:14:45.652257   61720 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/kubernetes-upgrade-907863/id_rsa Username:docker}
	I0806 00:14:45.733623   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0806 00:14:45.759216   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0806 00:14:45.785907   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 00:14:45.812282   61720 provision.go:87] duration metric: took 281.788709ms to configureAuth
	I0806 00:14:45.812310   61720 buildroot.go:189] setting minikube options for container-runtime
	I0806 00:14:45.812466   61720 config.go:182] Loaded profile config "kubernetes-upgrade-907863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0806 00:14:45.812527   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHHostname
	I0806 00:14:45.815951   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.816375   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:45.816401   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.816598   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHPort
	I0806 00:14:45.816826   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:45.816995   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:45.817171   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHUsername
	I0806 00:14:45.817360   61720 main.go:141] libmachine: Using SSH client type: native
	I0806 00:14:45.817605   61720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0806 00:14:45.817633   61720 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 00:14:46.096742   61720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 00:14:46.096779   61720 main.go:141] libmachine: Checking connection to Docker...
	I0806 00:14:46.096793   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetURL
	I0806 00:14:46.098348   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | Using libvirt version 6000000
	I0806 00:14:46.100964   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:46.101255   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:46.101277   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:46.101439   61720 main.go:141] libmachine: Docker is up and running!
	I0806 00:14:46.101449   61720 main.go:141] libmachine: Reticulating splines...
	I0806 00:14:46.101457   61720 client.go:171] duration metric: took 23.613079714s to LocalClient.Create
	I0806 00:14:46.101483   61720 start.go:167] duration metric: took 23.613147049s to libmachine.API.Create "kubernetes-upgrade-907863"
	I0806 00:14:46.101494   61720 start.go:293] postStartSetup for "kubernetes-upgrade-907863" (driver="kvm2")
	I0806 00:14:46.101508   61720 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 00:14:46.101531   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .DriverName
	I0806 00:14:46.101781   61720 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 00:14:46.101829   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHHostname
	I0806 00:14:46.104347   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:46.104786   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:46.104813   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:46.105081   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHPort
	I0806 00:14:46.105257   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:46.105445   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHUsername
	I0806 00:14:46.105604   61720 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/kubernetes-upgrade-907863/id_rsa Username:docker}
	I0806 00:14:46.188914   61720 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 00:14:46.193808   61720 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 00:14:46.193837   61720 filesync.go:126] Scanning /home/jenkins/minikube-integration/19373-9606/.minikube/addons for local assets ...
	I0806 00:14:46.193939   61720 filesync.go:126] Scanning /home/jenkins/minikube-integration/19373-9606/.minikube/files for local assets ...
	I0806 00:14:46.194050   61720 filesync.go:149] local asset: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem -> 167922.pem in /etc/ssl/certs
	I0806 00:14:46.194181   61720 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 00:14:46.208786   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem --> /etc/ssl/certs/167922.pem (1708 bytes)
	I0806 00:14:46.234274   61720 start.go:296] duration metric: took 132.765664ms for postStartSetup
	I0806 00:14:46.234326   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetConfigRaw
	I0806 00:14:46.234938   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetIP
	I0806 00:14:46.237911   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:46.238167   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:46.238204   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:46.238390   61720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/config.json ...
	I0806 00:14:46.238584   61720 start.go:128] duration metric: took 23.774163741s to createHost
	I0806 00:14:46.238611   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHHostname
	I0806 00:14:46.240741   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:46.241026   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:46.241051   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:46.241251   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHPort
	I0806 00:14:46.241413   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:46.241580   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:46.241731   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHUsername
	I0806 00:14:46.241879   61720 main.go:141] libmachine: Using SSH client type: native
	I0806 00:14:46.242047   61720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0806 00:14:46.242056   61720 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 00:14:46.343871   61720 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722903286.327466870
	
	I0806 00:14:46.343890   61720 fix.go:216] guest clock: 1722903286.327466870
	I0806 00:14:46.343897   61720 fix.go:229] Guest: 2024-08-06 00:14:46.32746687 +0000 UTC Remote: 2024-08-06 00:14:46.238596191 +0000 UTC m=+34.462085673 (delta=88.870679ms)
	I0806 00:14:46.343917   61720 fix.go:200] guest clock delta is within tolerance: 88.870679ms
	I0806 00:14:46.343921   61720 start.go:83] releasing machines lock for "kubernetes-upgrade-907863", held for 23.87968491s
	I0806 00:14:46.343942   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .DriverName
	I0806 00:14:46.344223   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetIP
	I0806 00:14:46.346864   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:46.347365   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:46.347401   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:46.347607   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .DriverName
	I0806 00:14:46.348173   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .DriverName
	I0806 00:14:46.348392   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .DriverName
	I0806 00:14:46.348501   61720 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 00:14:46.348545   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHHostname
	I0806 00:14:46.348623   61720 ssh_runner.go:195] Run: cat /version.json
	I0806 00:14:46.348646   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHHostname
	I0806 00:14:46.351386   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:46.351533   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:46.351746   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:46.351771   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:46.351895   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHPort
	I0806 00:14:46.352002   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:46.352025   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:46.352047   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:46.352224   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHPort
	I0806 00:14:46.352225   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHUsername
	I0806 00:14:46.352398   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:46.352410   61720 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/kubernetes-upgrade-907863/id_rsa Username:docker}
	I0806 00:14:46.352531   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHUsername
	I0806 00:14:46.352676   61720 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/kubernetes-upgrade-907863/id_rsa Username:docker}
	I0806 00:14:46.432018   61720 ssh_runner.go:195] Run: systemctl --version
	I0806 00:14:46.454693   61720 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 00:14:46.628607   61720 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 00:14:46.638675   61720 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 00:14:46.638749   61720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 00:14:46.659007   61720 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 00:14:46.659034   61720 start.go:495] detecting cgroup driver to use...
	I0806 00:14:46.659142   61720 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 00:14:46.680151   61720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:14:46.698336   61720 docker.go:217] disabling cri-docker service (if available) ...
	I0806 00:14:46.698504   61720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 00:14:46.715093   61720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 00:14:46.730157   61720 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 00:14:46.849640   61720 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 00:14:47.007622   61720 docker.go:233] disabling docker service ...
	I0806 00:14:47.007694   61720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 00:14:47.022913   61720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 00:14:47.037788   61720 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 00:14:47.172160   61720 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 00:14:47.297771   61720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 00:14:47.315774   61720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:14:47.335893   61720 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0806 00:14:47.335977   61720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 00:14:47.350348   61720 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 00:14:47.350417   61720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 00:14:47.362187   61720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 00:14:47.375760   61720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 00:14:47.388776   61720 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 00:14:47.401761   61720 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 00:14:47.412720   61720 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0806 00:14:47.412787   61720 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0806 00:14:47.428644   61720 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 00:14:47.440189   61720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:14:47.553614   61720 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0806 00:14:47.698481   61720 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 00:14:47.698569   61720 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 00:14:47.703748   61720 start.go:563] Will wait 60s for crictl version
	I0806 00:14:47.703812   61720 ssh_runner.go:195] Run: which crictl
	I0806 00:14:47.708040   61720 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 00:14:47.749798   61720 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 00:14:47.749884   61720 ssh_runner.go:195] Run: crio --version
	I0806 00:14:47.779166   61720 ssh_runner.go:195] Run: crio --version
	I0806 00:14:47.812309   61720 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0806 00:14:46.745487   62278 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0806 00:14:46.745523   62278 preload.go:148] Found local preload: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4
	I0806 00:14:46.745529   62278 cache.go:57] Caching tarball of preloaded images
	I0806 00:14:46.745656   62278 preload.go:174] Found /home/jenkins/minikube-integration/19373-9606/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0806 00:14:46.745670   62278 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.1 on crio
	I0806 00:14:46.745775   62278 profile.go:148] Saving config to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/stopped-upgrade-936666/config.json ...
	I0806 00:14:46.745792   62278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/stopped-upgrade-936666/config.json: {Name:mk6c297a1f267f679d468f1e18f9a6917b08cdfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:14:46.745948   62278 cache.go:208] Successfully downloaded all kic artifacts
	I0806 00:14:46.745994   62278 start.go:352] acquiring machines lock for stopped-upgrade-936666: {Name:mkd2ba511c39504598222edbf83078b718329186 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:14:46.982836   62044 provision.go:177] copyRemoteCerts
	I0806 00:14:46.982898   62044 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 00:14:46.982922   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHHostname
	I0806 00:14:46.985958   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:46.986362   62044 main.go:141] libmachine: (pause-161508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:8e:29", ip: ""} in network mk-pause-161508: {Iface:virbr3 ExpiryTime:2024-08-06 01:13:19 +0000 UTC Type:0 Mac:52:54:00:33:8e:29 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:pause-161508 Clientid:01:52:54:00:33:8e:29}
	I0806 00:14:46.986394   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined IP address 192.168.39.118 and MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:46.986557   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHPort
	I0806 00:14:46.986805   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHKeyPath
	I0806 00:14:46.986991   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHUsername
	I0806 00:14:46.987143   62044 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/pause-161508/id_rsa Username:docker}
	I0806 00:14:47.070447   62044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0806 00:14:47.109142   62044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0806 00:14:47.137301   62044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 00:14:47.164786   62044 provision.go:87] duration metric: took 412.109539ms to configureAuth
	I0806 00:14:47.164817   62044 buildroot.go:189] setting minikube options for container-runtime
	I0806 00:14:47.165068   62044 config.go:182] Loaded profile config "pause-161508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 00:14:47.165146   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHHostname
	I0806 00:14:47.168216   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:47.168569   62044 main.go:141] libmachine: (pause-161508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:8e:29", ip: ""} in network mk-pause-161508: {Iface:virbr3 ExpiryTime:2024-08-06 01:13:19 +0000 UTC Type:0 Mac:52:54:00:33:8e:29 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:pause-161508 Clientid:01:52:54:00:33:8e:29}
	I0806 00:14:47.168631   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined IP address 192.168.39.118 and MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:47.168795   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHPort
	I0806 00:14:47.169007   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHKeyPath
	I0806 00:14:47.169210   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHKeyPath
	I0806 00:14:47.169368   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHUsername
	I0806 00:14:47.169555   62044 main.go:141] libmachine: Using SSH client type: native
	I0806 00:14:47.169746   62044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0806 00:14:47.169767   62044 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 00:14:47.813708   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetIP
	I0806 00:14:47.816108   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:47.816440   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:47.816466   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:47.816644   61720 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0806 00:14:47.821182   61720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 00:14:47.834308   61720 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-907863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-907863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 00:14:47.834420   61720 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0806 00:14:47.834474   61720 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 00:14:47.868197   61720 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0806 00:14:47.868274   61720 ssh_runner.go:195] Run: which lz4
	I0806 00:14:47.872506   61720 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0806 00:14:47.877108   61720 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 00:14:47.877144   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0806 00:14:49.556736   61720 crio.go:462] duration metric: took 1.684254918s to copy over tarball
	I0806 00:14:49.556831   61720 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0806 00:14:53.014036   62278 start.go:356] acquired machines lock for "stopped-upgrade-936666" in 6.268019049s
	I0806 00:14:53.014087   62278 start.go:91] Provisioning new machine with config: &{Name:stopped-upgrade-936666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-936666 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 00:14:53.014197   62278 start.go:131] createHost starting for "" (driver="kvm2")
	I0806 00:14:52.768207   62044 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 00:14:52.768237   62044 machine.go:97] duration metric: took 6.398573772s to provisionDockerMachine
	I0806 00:14:52.768252   62044 start.go:293] postStartSetup for "pause-161508" (driver="kvm2")
	I0806 00:14:52.768266   62044 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 00:14:52.768286   62044 main.go:141] libmachine: (pause-161508) Calling .DriverName
	I0806 00:14:52.768771   62044 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 00:14:52.768800   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHHostname
	I0806 00:14:52.772553   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:52.773026   62044 main.go:141] libmachine: (pause-161508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:8e:29", ip: ""} in network mk-pause-161508: {Iface:virbr3 ExpiryTime:2024-08-06 01:13:19 +0000 UTC Type:0 Mac:52:54:00:33:8e:29 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:pause-161508 Clientid:01:52:54:00:33:8e:29}
	I0806 00:14:52.773057   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined IP address 192.168.39.118 and MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:52.773385   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHPort
	I0806 00:14:52.773599   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHKeyPath
	I0806 00:14:52.773756   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHUsername
	I0806 00:14:52.774022   62044 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/pause-161508/id_rsa Username:docker}
	I0806 00:14:52.858402   62044 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 00:14:52.864471   62044 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 00:14:52.864505   62044 filesync.go:126] Scanning /home/jenkins/minikube-integration/19373-9606/.minikube/addons for local assets ...
	I0806 00:14:52.864570   62044 filesync.go:126] Scanning /home/jenkins/minikube-integration/19373-9606/.minikube/files for local assets ...
	I0806 00:14:52.864674   62044 filesync.go:149] local asset: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem -> 167922.pem in /etc/ssl/certs
	I0806 00:14:52.864774   62044 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 00:14:52.875107   62044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem --> /etc/ssl/certs/167922.pem (1708 bytes)
	I0806 00:14:52.902994   62044 start.go:296] duration metric: took 134.72929ms for postStartSetup
	I0806 00:14:52.903034   62044 fix.go:56] duration metric: took 6.558964017s for fixHost
	I0806 00:14:52.903069   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHHostname
	I0806 00:14:52.905787   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:52.906163   62044 main.go:141] libmachine: (pause-161508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:8e:29", ip: ""} in network mk-pause-161508: {Iface:virbr3 ExpiryTime:2024-08-06 01:13:19 +0000 UTC Type:0 Mac:52:54:00:33:8e:29 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:pause-161508 Clientid:01:52:54:00:33:8e:29}
	I0806 00:14:52.906193   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined IP address 192.168.39.118 and MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:52.906354   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHPort
	I0806 00:14:52.906552   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHKeyPath
	I0806 00:14:52.906724   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHKeyPath
	I0806 00:14:52.906870   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHUsername
	I0806 00:14:52.907046   62044 main.go:141] libmachine: Using SSH client type: native
	I0806 00:14:52.907265   62044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0806 00:14:52.907276   62044 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 00:14:53.013802   62044 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722903293.008614911
	
	I0806 00:14:53.013833   62044 fix.go:216] guest clock: 1722903293.008614911
	I0806 00:14:53.013843   62044 fix.go:229] Guest: 2024-08-06 00:14:53.008614911 +0000 UTC Remote: 2024-08-06 00:14:52.903038034 +0000 UTC m=+11.159767359 (delta=105.576877ms)
	I0806 00:14:53.013868   62044 fix.go:200] guest clock delta is within tolerance: 105.576877ms
	I0806 00:14:53.013875   62044 start.go:83] releasing machines lock for "pause-161508", held for 6.669834897s
	I0806 00:14:53.013902   62044 main.go:141] libmachine: (pause-161508) Calling .DriverName
	I0806 00:14:53.014197   62044 main.go:141] libmachine: (pause-161508) Calling .GetIP
	I0806 00:14:53.017386   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:53.017783   62044 main.go:141] libmachine: (pause-161508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:8e:29", ip: ""} in network mk-pause-161508: {Iface:virbr3 ExpiryTime:2024-08-06 01:13:19 +0000 UTC Type:0 Mac:52:54:00:33:8e:29 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:pause-161508 Clientid:01:52:54:00:33:8e:29}
	I0806 00:14:53.017818   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined IP address 192.168.39.118 and MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:53.017973   62044 main.go:141] libmachine: (pause-161508) Calling .DriverName
	I0806 00:14:53.018597   62044 main.go:141] libmachine: (pause-161508) Calling .DriverName
	I0806 00:14:53.018807   62044 main.go:141] libmachine: (pause-161508) Calling .DriverName
	I0806 00:14:53.018919   62044 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 00:14:53.018957   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHHostname
	I0806 00:14:53.018977   62044 ssh_runner.go:195] Run: cat /version.json
	I0806 00:14:53.019001   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHHostname
	I0806 00:14:53.021792   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:53.022186   62044 main.go:141] libmachine: (pause-161508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:8e:29", ip: ""} in network mk-pause-161508: {Iface:virbr3 ExpiryTime:2024-08-06 01:13:19 +0000 UTC Type:0 Mac:52:54:00:33:8e:29 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:pause-161508 Clientid:01:52:54:00:33:8e:29}
	I0806 00:14:53.022211   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined IP address 192.168.39.118 and MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:53.022232   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:53.022456   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHPort
	I0806 00:14:53.022705   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHKeyPath
	I0806 00:14:53.022735   62044 main.go:141] libmachine: (pause-161508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:8e:29", ip: ""} in network mk-pause-161508: {Iface:virbr3 ExpiryTime:2024-08-06 01:13:19 +0000 UTC Type:0 Mac:52:54:00:33:8e:29 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:pause-161508 Clientid:01:52:54:00:33:8e:29}
	I0806 00:14:53.022761   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined IP address 192.168.39.118 and MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:53.022922   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHUsername
	I0806 00:14:53.022980   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHPort
	I0806 00:14:53.023161   62044 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/pause-161508/id_rsa Username:docker}
	I0806 00:14:53.023305   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHKeyPath
	I0806 00:14:53.023458   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHUsername
	I0806 00:14:53.023609   62044 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/pause-161508/id_rsa Username:docker}
	I0806 00:14:53.100662   62044 ssh_runner.go:195] Run: systemctl --version
	I0806 00:14:53.127867   62044 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 00:14:53.286775   62044 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 00:14:53.298035   62044 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 00:14:53.298114   62044 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 00:14:53.310993   62044 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
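
The find ... -exec mv step above sidelines any bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so only the CNI that minikube manages stays active. A rough Go equivalent of that rename pass, assuming the same directory and suffix (disableBridgeCNIConfigs is a hypothetical helper, not minikube's own function):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIConfigs renames bridge/podman CNI config files in dir so the
// CNI runtime ignores them, mirroring the find ... -exec mv step in the log.
func disableBridgeCNIConfigs(dir string) error {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return err
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return err
			}
			fmt.Println("disabled", src)
		}
	}
	return nil
}

func main() {
	if err := disableBridgeCNIConfigs("/etc/cni/net.d"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
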
	I0806 00:14:53.311022   62044 start.go:495] detecting cgroup driver to use...
	I0806 00:14:53.311132   62044 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 00:14:53.334023   62044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:14:53.356468   62044 docker.go:217] disabling cri-docker service (if available) ...
	I0806 00:14:53.356540   62044 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 00:14:53.376425   62044 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 00:14:53.395643   62044 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 00:14:53.562361   62044 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 00:14:53.750649   62044 docker.go:233] disabling docker service ...
	I0806 00:14:53.750739   62044 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 00:14:53.770975   62044 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 00:14:53.787945   62044 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 00:14:53.968997   62044 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 00:14:54.130928   62044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 00:14:54.149110   62044 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:14:54.171520   62044 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0806 00:14:54.171594   62044 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 00:14:54.184680   62044 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 00:14:54.184743   62044 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 00:14:54.197904   62044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 00:14:54.210910   62044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 00:14:54.225671   62044 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 00:14:54.238316   62044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 00:14:54.251342   62044 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 00:14:54.263693   62044 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
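
The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins pause_image, switches cgroup_manager to cgroupfs, re-adds conmon_cgroup = "pod", and injects a default_sysctls entry that lowers net.ipv4.ip_unprivileged_port_start to 0. A small sketch of the same kind of rewrites done with Go regexps instead of sed (patchCrioConf is hypothetical; the sysctl injection and error handling are trimmed):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// patchCrioConf applies line rewrites of the kind the log shows sed doing.
func patchCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	conf := string(data)
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
	// Drop any existing conmon_cgroup line and re-add it after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	return os.WriteFile(path, []byte(conf), 0o644)
}

func main() {
	if err := patchCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.9", "cgroupfs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
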
	I0806 00:14:54.275743   62044 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 00:14:54.286930   62044 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 00:14:54.298370   62044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:14:54.444776   62044 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0806 00:14:55.565977   62044 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.121158698s)
	I0806 00:14:55.566012   62044 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 00:14:55.566063   62044 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 00:14:55.572805   62044 start.go:563] Will wait 60s for crictl version
	I0806 00:14:55.572876   62044 ssh_runner.go:195] Run: which crictl
	I0806 00:14:55.578202   62044 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 00:14:55.634895   62044 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 00:14:55.634995   62044 ssh_runner.go:195] Run: crio --version
	I0806 00:14:55.675524   62044 ssh_runner.go:195] Run: crio --version
	I0806 00:14:55.718988   62044 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0806 00:14:53.098821   62278 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0806 00:14:53.099126   62278 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 00:14:53.099171   62278 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0806 00:14:53.118437   62278 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:42441
	I0806 00:14:53.118853   62278 main.go:134] libmachine: () Calling .GetVersion
	I0806 00:14:53.119475   62278 main.go:134] libmachine: Using API Version  1
	I0806 00:14:53.119494   62278 main.go:134] libmachine: () Calling .SetConfigRaw
	I0806 00:14:53.119844   62278 main.go:134] libmachine: () Calling .GetMachineName
	I0806 00:14:53.120063   62278 main.go:134] libmachine: (stopped-upgrade-936666) Calling .GetMachineName
	I0806 00:14:53.120245   62278 main.go:134] libmachine: (stopped-upgrade-936666) Calling .DriverName
	I0806 00:14:53.120406   62278 start.go:165] libmachine.API.Create for "stopped-upgrade-936666" (driver="kvm2")
	I0806 00:14:53.120426   62278 client.go:168] LocalClient.Create starting
	I0806 00:14:53.120453   62278 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem
	I0806 00:14:53.120481   62278 main.go:134] libmachine: Decoding PEM data...
	I0806 00:14:53.120499   62278 main.go:134] libmachine: Parsing certificate...
	I0806 00:14:53.120562   62278 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem
	I0806 00:14:53.120575   62278 main.go:134] libmachine: Decoding PEM data...
	I0806 00:14:53.120583   62278 main.go:134] libmachine: Parsing certificate...
	I0806 00:14:53.120596   62278 main.go:134] libmachine: Running pre-create checks...
	I0806 00:14:53.120602   62278 main.go:134] libmachine: (stopped-upgrade-936666) Calling .PreCreateCheck
	I0806 00:14:53.121013   62278 main.go:134] libmachine: (stopped-upgrade-936666) Calling .GetConfigRaw
	I0806 00:14:53.121506   62278 main.go:134] libmachine: Creating machine...
	I0806 00:14:53.121515   62278 main.go:134] libmachine: (stopped-upgrade-936666) Calling .Create
	I0806 00:14:53.121673   62278 main.go:134] libmachine: (stopped-upgrade-936666) Creating KVM machine...
	I0806 00:14:53.123023   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | found existing default KVM network
	I0806 00:14:53.124320   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | I0806 00:14:53.124156   62316 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:48:19:b6} reservation:<nil>}
	I0806 00:14:53.125074   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | I0806 00:14:53.124977   62316 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:9b:c7:ec} reservation:<nil>}
	I0806 00:14:53.126154   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | I0806 00:14:53.126069   62316 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002890f0}
	I0806 00:14:53.126182   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | created network xml: 
	I0806 00:14:53.126200   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | <network>
	I0806 00:14:53.126208   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG |   <name>mk-stopped-upgrade-936666</name>
	I0806 00:14:53.126214   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG |   <dns enable='no'/>
	I0806 00:14:53.126220   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG |   
	I0806 00:14:53.126225   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0806 00:14:53.126231   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG |     <dhcp>
	I0806 00:14:53.126237   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0806 00:14:53.126247   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG |     </dhcp>
	I0806 00:14:53.126254   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG |   </ip>
	I0806 00:14:53.126261   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG |   
	I0806 00:14:53.126268   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | </network>
	I0806 00:14:53.126277   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | 
	I0806 00:14:53.245778   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | trying to create private KVM network mk-stopped-upgrade-936666 192.168.61.0/24...
	I0806 00:14:53.321935   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | private KVM network mk-stopped-upgrade-936666 192.168.61.0/24 created
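
Before the mk-stopped-upgrade-936666 network is created, network.go walks candidate private /24 subnets, skipping 192.168.39.0/24 and 192.168.50.0/24 because they are taken and settling on 192.168.61.0/24. A simplified sketch of that scan; the exact candidate list is an assumption based only on the subnets these logs mention:

package main

import "fmt"

// firstFreeSubnet returns the first candidate 192.168.x.0/24 subnet that is
// not already in use, mirroring the "skipping subnet ... using free private
// subnet" lines in the log.
func firstFreeSubnet(taken map[string]bool) (string, bool) {
	// Candidate third octets; assumed from the subnets seen in these logs.
	for _, octet := range []int{39, 50, 61, 72, 83, 94} {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[subnet] {
			return subnet, true
		}
	}
	return "", false
}

func main() {
	taken := map[string]bool{
		"192.168.39.0/24": true, // mk-pause-161508 (virbr3)
		"192.168.50.0/24": true, // existing network (virbr2)
	}
	if subnet, ok := firstFreeSubnet(taken); ok {
		fmt.Println("using free private subnet", subnet)
	}
}
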
	I0806 00:14:53.322043   62278 main.go:134] libmachine: (stopped-upgrade-936666) Setting up store path in /home/jenkins/minikube-integration/19373-9606/.minikube/machines/stopped-upgrade-936666 ...
	I0806 00:14:53.322189   62278 main.go:134] libmachine: (stopped-upgrade-936666) Building disk image from file:///home/jenkins/minikube-integration/19373-9606/.minikube/cache/iso/amd64/minikube-v1.26.0-amd64.iso
	I0806 00:14:53.322216   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | I0806 00:14:53.322117   62316 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19373-9606/.minikube
	I0806 00:14:53.322308   62278 main.go:134] libmachine: (stopped-upgrade-936666) Downloading /home/jenkins/minikube-integration/19373-9606/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19373-9606/.minikube/cache/iso/amd64/minikube-v1.26.0-amd64.iso...
	I0806 00:14:53.539956   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | I0806 00:14:53.539797   62316 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/stopped-upgrade-936666/id_rsa...
	I0806 00:14:53.598845   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | I0806 00:14:53.598675   62316 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/stopped-upgrade-936666/stopped-upgrade-936666.rawdisk...
	I0806 00:14:53.598871   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | Writing magic tar header
	I0806 00:14:53.598892   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | Writing SSH key tar header
	I0806 00:14:53.598907   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | I0806 00:14:53.598785   62316 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19373-9606/.minikube/machines/stopped-upgrade-936666 ...
	I0806 00:14:53.598921   62278 main.go:134] libmachine: (stopped-upgrade-936666) Setting executable bit set on /home/jenkins/minikube-integration/19373-9606/.minikube/machines/stopped-upgrade-936666 (perms=drwx------)
	I0806 00:14:53.598933   62278 main.go:134] libmachine: (stopped-upgrade-936666) Setting executable bit set on /home/jenkins/minikube-integration/19373-9606/.minikube/machines (perms=drwxr-xr-x)
	I0806 00:14:53.598940   62278 main.go:134] libmachine: (stopped-upgrade-936666) Setting executable bit set on /home/jenkins/minikube-integration/19373-9606/.minikube (perms=drwxr-xr-x)
	I0806 00:14:53.598948   62278 main.go:134] libmachine: (stopped-upgrade-936666) Setting executable bit set on /home/jenkins/minikube-integration/19373-9606 (perms=drwxrwxr-x)
	I0806 00:14:53.598956   62278 main.go:134] libmachine: (stopped-upgrade-936666) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0806 00:14:53.598969   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/stopped-upgrade-936666
	I0806 00:14:53.598993   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19373-9606/.minikube/machines
	I0806 00:14:53.599003   62278 main.go:134] libmachine: (stopped-upgrade-936666) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0806 00:14:53.599009   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19373-9606/.minikube
	I0806 00:14:53.599019   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19373-9606
	I0806 00:14:53.599035   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0806 00:14:53.599040   62278 main.go:134] libmachine: (stopped-upgrade-936666) Creating domain...
	I0806 00:14:53.599100   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | Checking permissions on dir: /home/jenkins
	I0806 00:14:53.599121   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | Checking permissions on dir: /home
	I0806 00:14:53.599139   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | Skipping /home - not owner
	I0806 00:14:53.600390   62278 main.go:134] libmachine: (stopped-upgrade-936666) define libvirt domain using xml: 
	I0806 00:14:53.600412   62278 main.go:134] libmachine: (stopped-upgrade-936666) <domain type='kvm'>
	I0806 00:14:53.600423   62278 main.go:134] libmachine: (stopped-upgrade-936666)   <name>stopped-upgrade-936666</name>
	I0806 00:14:53.600434   62278 main.go:134] libmachine: (stopped-upgrade-936666)   <memory unit='MiB'>2200</memory>
	I0806 00:14:53.600440   62278 main.go:134] libmachine: (stopped-upgrade-936666)   <vcpu>2</vcpu>
	I0806 00:14:53.600449   62278 main.go:134] libmachine: (stopped-upgrade-936666)   <features>
	I0806 00:14:53.600454   62278 main.go:134] libmachine: (stopped-upgrade-936666)     <acpi/>
	I0806 00:14:53.600459   62278 main.go:134] libmachine: (stopped-upgrade-936666)     <apic/>
	I0806 00:14:53.600464   62278 main.go:134] libmachine: (stopped-upgrade-936666)     <pae/>
	I0806 00:14:53.600468   62278 main.go:134] libmachine: (stopped-upgrade-936666)     
	I0806 00:14:53.600474   62278 main.go:134] libmachine: (stopped-upgrade-936666)   </features>
	I0806 00:14:53.600479   62278 main.go:134] libmachine: (stopped-upgrade-936666)   <cpu mode='host-passthrough'>
	I0806 00:14:53.600484   62278 main.go:134] libmachine: (stopped-upgrade-936666)   
	I0806 00:14:53.600488   62278 main.go:134] libmachine: (stopped-upgrade-936666)   </cpu>
	I0806 00:14:53.600493   62278 main.go:134] libmachine: (stopped-upgrade-936666)   <os>
	I0806 00:14:53.600497   62278 main.go:134] libmachine: (stopped-upgrade-936666)     <type>hvm</type>
	I0806 00:14:53.600503   62278 main.go:134] libmachine: (stopped-upgrade-936666)     <boot dev='cdrom'/>
	I0806 00:14:53.600507   62278 main.go:134] libmachine: (stopped-upgrade-936666)     <boot dev='hd'/>
	I0806 00:14:53.600512   62278 main.go:134] libmachine: (stopped-upgrade-936666)     <bootmenu enable='no'/>
	I0806 00:14:53.600518   62278 main.go:134] libmachine: (stopped-upgrade-936666)   </os>
	I0806 00:14:53.600523   62278 main.go:134] libmachine: (stopped-upgrade-936666)   <devices>
	I0806 00:14:53.600528   62278 main.go:134] libmachine: (stopped-upgrade-936666)     <disk type='file' device='cdrom'>
	I0806 00:14:53.600538   62278 main.go:134] libmachine: (stopped-upgrade-936666)       <source file='/home/jenkins/minikube-integration/19373-9606/.minikube/machines/stopped-upgrade-936666/boot2docker.iso'/>
	I0806 00:14:53.600542   62278 main.go:134] libmachine: (stopped-upgrade-936666)       <target dev='hdc' bus='scsi'/>
	I0806 00:14:53.600548   62278 main.go:134] libmachine: (stopped-upgrade-936666)       <readonly/>
	I0806 00:14:53.600552   62278 main.go:134] libmachine: (stopped-upgrade-936666)     </disk>
	I0806 00:14:53.600558   62278 main.go:134] libmachine: (stopped-upgrade-936666)     <disk type='file' device='disk'>
	I0806 00:14:53.600564   62278 main.go:134] libmachine: (stopped-upgrade-936666)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0806 00:14:53.600587   62278 main.go:134] libmachine: (stopped-upgrade-936666)       <source file='/home/jenkins/minikube-integration/19373-9606/.minikube/machines/stopped-upgrade-936666/stopped-upgrade-936666.rawdisk'/>
	I0806 00:14:53.600597   62278 main.go:134] libmachine: (stopped-upgrade-936666)       <target dev='hda' bus='virtio'/>
	I0806 00:14:53.600602   62278 main.go:134] libmachine: (stopped-upgrade-936666)     </disk>
	I0806 00:14:53.600608   62278 main.go:134] libmachine: (stopped-upgrade-936666)     <interface type='network'>
	I0806 00:14:53.600615   62278 main.go:134] libmachine: (stopped-upgrade-936666)       <source network='mk-stopped-upgrade-936666'/>
	I0806 00:14:53.600620   62278 main.go:134] libmachine: (stopped-upgrade-936666)       <model type='virtio'/>
	I0806 00:14:53.600625   62278 main.go:134] libmachine: (stopped-upgrade-936666)     </interface>
	I0806 00:14:53.600633   62278 main.go:134] libmachine: (stopped-upgrade-936666)     <interface type='network'>
	I0806 00:14:53.600639   62278 main.go:134] libmachine: (stopped-upgrade-936666)       <source network='default'/>
	I0806 00:14:53.600644   62278 main.go:134] libmachine: (stopped-upgrade-936666)       <model type='virtio'/>
	I0806 00:14:53.600648   62278 main.go:134] libmachine: (stopped-upgrade-936666)     </interface>
	I0806 00:14:53.600653   62278 main.go:134] libmachine: (stopped-upgrade-936666)     <serial type='pty'>
	I0806 00:14:53.600658   62278 main.go:134] libmachine: (stopped-upgrade-936666)       <target port='0'/>
	I0806 00:14:53.600666   62278 main.go:134] libmachine: (stopped-upgrade-936666)     </serial>
	I0806 00:14:53.600671   62278 main.go:134] libmachine: (stopped-upgrade-936666)     <console type='pty'>
	I0806 00:14:53.600676   62278 main.go:134] libmachine: (stopped-upgrade-936666)       <target type='serial' port='0'/>
	I0806 00:14:53.600681   62278 main.go:134] libmachine: (stopped-upgrade-936666)     </console>
	I0806 00:14:53.600685   62278 main.go:134] libmachine: (stopped-upgrade-936666)     <rng model='virtio'>
	I0806 00:14:53.600691   62278 main.go:134] libmachine: (stopped-upgrade-936666)       <backend model='random'>/dev/random</backend>
	I0806 00:14:53.600697   62278 main.go:134] libmachine: (stopped-upgrade-936666)     </rng>
	I0806 00:14:53.600702   62278 main.go:134] libmachine: (stopped-upgrade-936666)     
	I0806 00:14:53.600706   62278 main.go:134] libmachine: (stopped-upgrade-936666)     
	I0806 00:14:53.600711   62278 main.go:134] libmachine: (stopped-upgrade-936666)   </devices>
	I0806 00:14:53.600716   62278 main.go:134] libmachine: (stopped-upgrade-936666) </domain>
	I0806 00:14:53.600724   62278 main.go:134] libmachine: (stopped-upgrade-936666) 
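
The <domain> definition above is generated text: name, memory, vCPUs, ISO, raw disk and network interfaces are interpolated into a libvirt domain template before the domain is defined. A cut-down sketch of that templating step, with a much shorter, hypothetical template than the driver's real one:

package main

import (
	"os"
	"text/template"
)

// A heavily trimmed stand-in for the KVM driver's libvirt domain template.
const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISOPath}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

type domainConfig struct {
	Name      string
	MemoryMiB int
	CPUs      int
	ISOPath   string
	DiskPath  string
	Network   string
}

func main() {
	tmpl := template.Must(template.New("domain").Parse(domainTmpl))
	// Values as they appear in the log above.
	cfg := domainConfig{
		Name:      "stopped-upgrade-936666",
		MemoryMiB: 2200,
		CPUs:      2,
		ISOPath:   "/home/jenkins/minikube-integration/19373-9606/.minikube/machines/stopped-upgrade-936666/boot2docker.iso",
		DiskPath:  "/home/jenkins/minikube-integration/19373-9606/.minikube/machines/stopped-upgrade-936666/stopped-upgrade-936666.rawdisk",
		Network:   "mk-stopped-upgrade-936666",
	}
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
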
	I0806 00:14:53.671013   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | domain stopped-upgrade-936666 has defined MAC address 52:54:00:aa:80:22 in network default
	I0806 00:14:53.671732   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | domain stopped-upgrade-936666 has defined MAC address 52:54:00:7b:85:58 in network mk-stopped-upgrade-936666
	I0806 00:14:53.671808   62278 main.go:134] libmachine: (stopped-upgrade-936666) Ensuring networks are active...
	I0806 00:14:53.672684   62278 main.go:134] libmachine: (stopped-upgrade-936666) Ensuring network default is active
	I0806 00:14:53.673005   62278 main.go:134] libmachine: (stopped-upgrade-936666) Ensuring network mk-stopped-upgrade-936666 is active
	I0806 00:14:53.673659   62278 main.go:134] libmachine: (stopped-upgrade-936666) Getting domain xml...
	I0806 00:14:53.674706   62278 main.go:134] libmachine: (stopped-upgrade-936666) Creating domain...
	I0806 00:14:55.890304   62278 main.go:134] libmachine: (stopped-upgrade-936666) Waiting to get IP...
	I0806 00:14:55.891201   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | domain stopped-upgrade-936666 has defined MAC address 52:54:00:7b:85:58 in network mk-stopped-upgrade-936666
	I0806 00:14:55.891630   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | unable to find current IP address of domain stopped-upgrade-936666 in network mk-stopped-upgrade-936666
	I0806 00:14:55.891656   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | I0806 00:14:55.891621   62316 retry.go:31] will retry after 294.68666ms: waiting for machine to come up
	I0806 00:14:56.188405   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | domain stopped-upgrade-936666 has defined MAC address 52:54:00:7b:85:58 in network mk-stopped-upgrade-936666
	I0806 00:14:56.188887   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | unable to find current IP address of domain stopped-upgrade-936666 in network mk-stopped-upgrade-936666
	I0806 00:14:56.188907   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | I0806 00:14:56.188835   62316 retry.go:31] will retry after 311.048191ms: waiting for machine to come up
	I0806 00:14:56.501266   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | domain stopped-upgrade-936666 has defined MAC address 52:54:00:7b:85:58 in network mk-stopped-upgrade-936666
	I0806 00:14:56.501793   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | unable to find current IP address of domain stopped-upgrade-936666 in network mk-stopped-upgrade-936666
	I0806 00:14:56.501910   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | I0806 00:14:56.501852   62316 retry.go:31] will retry after 347.169902ms: waiting for machine to come up
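
The "Waiting to get IP" / retry.go lines above are a polling loop: each attempt asks libvirt for the DHCP lease matching the domain's MAC and, when none exists yet, sleeps a short randomized delay before trying again. A minimal sketch of that loop, with a hypothetical lookupIP callback standing in for the lease query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookupIP until it returns an address or attempts run out,
// sleeping a jittered delay between tries like the retry.go lines in the log.
func waitForIP(lookupIP func() (string, error), attempts int) (string, error) {
	for i := 0; i < attempts; i++ {
		ip, err := lookupIP()
		if err == nil && ip != "" {
			return ip, nil
		}
		delay := 200*time.Millisecond + time.Duration(rand.Intn(300))*time.Millisecond
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	calls := 0
	// Fake lookup: pretend the DHCP lease appears on the third attempt.
	lookup := func() (string, error) {
		calls++
		if calls < 3 {
			return "", errors.New("no lease yet")
		}
		return "192.168.61.10", nil
	}
	ip, err := waitForIP(lookup, 10)
	fmt.Println(ip, err)
}
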
	I0806 00:14:55.720363   62044 main.go:141] libmachine: (pause-161508) Calling .GetIP
	I0806 00:14:55.723606   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:55.723915   62044 main.go:141] libmachine: (pause-161508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:8e:29", ip: ""} in network mk-pause-161508: {Iface:virbr3 ExpiryTime:2024-08-06 01:13:19 +0000 UTC Type:0 Mac:52:54:00:33:8e:29 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:pause-161508 Clientid:01:52:54:00:33:8e:29}
	I0806 00:14:55.723945   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined IP address 192.168.39.118 and MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:55.724210   62044 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0806 00:14:55.730905   62044 kubeadm.go:883] updating cluster {Name:pause-161508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3
ClusterName:pause-161508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.118 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false
olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 00:14:55.731109   62044 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0806 00:14:55.731169   62044 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 00:14:55.795823   62044 crio.go:514] all images are preloaded for cri-o runtime.
	I0806 00:14:55.795857   62044 crio.go:433] Images already preloaded, skipping extraction
	I0806 00:14:55.795919   62044 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 00:14:55.836123   62044 crio.go:514] all images are preloaded for cri-o runtime.
	I0806 00:14:55.836147   62044 cache_images.go:84] Images are preloaded, skipping loading
	I0806 00:14:55.836157   62044 kubeadm.go:934] updating node { 192.168.39.118 8443 v1.30.3 crio true true} ...
	I0806 00:14:55.836287   62044 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-161508 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.118
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:pause-161508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 00:14:55.836381   62044 ssh_runner.go:195] Run: crio config
	I0806 00:14:55.889286   62044 cni.go:84] Creating CNI manager for ""
	I0806 00:14:55.889312   62044 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 00:14:55.889323   62044 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 00:14:55.889351   62044 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.118 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-161508 NodeName:pause-161508 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.118"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.118 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 00:14:55.889555   62044 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.118
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-161508"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.118
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.118"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 00:14:55.889623   62044 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0806 00:14:55.932046   62044 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 00:14:55.932135   62044 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 00:14:55.950419   62044 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0806 00:14:56.036763   62044 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 00:14:56.159694   62044 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0806 00:14:56.254787   62044 ssh_runner.go:195] Run: grep 192.168.39.118	control-plane.minikube.internal$ /etc/hosts
	I0806 00:14:56.288571   62044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:14:56.575805   62044 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 00:14:56.754768   62044 certs.go:68] Setting up /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/pause-161508 for IP: 192.168.39.118
	I0806 00:14:56.754791   62044 certs.go:194] generating shared ca certs ...
	I0806 00:14:56.754810   62044 certs.go:226] acquiring lock for ca certs: {Name:mkf35a042c1656d191f542eee7fa087aad4d29d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:14:56.755074   62044 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key
	I0806 00:14:56.755141   62044 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key
	I0806 00:14:56.755154   62044 certs.go:256] generating profile certs ...
	I0806 00:14:56.755260   62044 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/pause-161508/client.key
	I0806 00:14:56.755339   62044 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/pause-161508/apiserver.key.423b175f
	I0806 00:14:56.755386   62044 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/pause-161508/proxy-client.key
	I0806 00:14:56.755522   62044 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792.pem (1338 bytes)
	W0806 00:14:56.755559   62044 certs.go:480] ignoring /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792_empty.pem, impossibly tiny 0 bytes
	I0806 00:14:56.755570   62044 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem (1679 bytes)
	I0806 00:14:56.755607   62044 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem (1082 bytes)
	I0806 00:14:56.755656   62044 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem (1123 bytes)
	I0806 00:14:56.755693   62044 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem (1679 bytes)
	I0806 00:14:56.755748   62044 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem (1708 bytes)
	I0806 00:14:56.756618   62044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 00:14:52.132666   61720 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.575793522s)
	I0806 00:14:52.132708   61720 crio.go:469] duration metric: took 2.575934958s to extract the tarball
	I0806 00:14:52.132718   61720 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0806 00:14:52.178655   61720 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 00:14:52.228379   61720 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0806 00:14:52.228410   61720 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0806 00:14:52.228492   61720 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0806 00:14:52.228495   61720 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:14:52.228503   61720 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0806 00:14:52.228568   61720 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0806 00:14:52.228594   61720 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0806 00:14:52.228592   61720 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0806 00:14:52.228636   61720 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0806 00:14:52.228641   61720 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 00:14:52.229894   61720 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0806 00:14:52.229923   61720 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0806 00:14:52.229926   61720 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0806 00:14:52.229893   61720 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0806 00:14:52.229939   61720 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:14:52.229949   61720 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0806 00:14:52.229901   61720 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0806 00:14:52.229956   61720 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 00:14:52.369603   61720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0806 00:14:52.373690   61720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0806 00:14:52.419211   61720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 00:14:52.421008   61720 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0806 00:14:52.421051   61720 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0806 00:14:52.421110   61720 ssh_runner.go:195] Run: which crictl
	I0806 00:14:52.437302   61720 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0806 00:14:52.437345   61720 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0806 00:14:52.437393   61720 ssh_runner.go:195] Run: which crictl
	I0806 00:14:52.449950   61720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0806 00:14:52.469829   61720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0806 00:14:52.469876   61720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0806 00:14:52.470021   61720 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0806 00:14:52.470060   61720 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 00:14:52.470095   61720 ssh_runner.go:195] Run: which crictl
	I0806 00:14:52.544769   61720 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0806 00:14:52.545022   61720 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0806 00:14:52.545061   61720 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0806 00:14:52.545119   61720 ssh_runner.go:195] Run: which crictl
	I0806 00:14:52.557704   61720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 00:14:52.557713   61720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0806 00:14:52.557759   61720 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0806 00:14:52.575840   61720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0806 00:14:52.598803   61720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0806 00:14:52.633351   61720 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0806 00:14:52.633411   61720 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0806 00:14:52.640820   61720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0806 00:14:52.664322   61720 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0806 00:14:52.664375   61720 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0806 00:14:52.664433   61720 ssh_runner.go:195] Run: which crictl
	I0806 00:14:52.686664   61720 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0806 00:14:52.686704   61720 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0806 00:14:52.686751   61720 ssh_runner.go:195] Run: which crictl
	I0806 00:14:52.710267   61720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0806 00:14:52.710288   61720 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0806 00:14:52.710295   61720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0806 00:14:52.710322   61720 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0806 00:14:52.710348   61720 ssh_runner.go:195] Run: which crictl
	I0806 00:14:52.753209   61720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0806 00:14:52.773045   61720 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0806 00:14:52.773045   61720 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0806 00:14:52.794936   61720 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0806 00:14:53.168413   61720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:14:53.311791   61720 cache_images.go:92] duration metric: took 1.083360411s to LoadCachedImages
	W0806 00:14:53.311894   61720 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19373-9606/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
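
The cache_images pass above compares the output of sudo crictl images --output json against the image list required for v1.20.0; anything missing or present at the wrong hash is removed and reloaded from the on-disk cache, and the warning fires because the pause_3.2 tarball is absent from that cache. A small sketch of the comparison side, assuming crictl's JSON output is an images array with id and repoTags fields:

package main

import (
	"encoding/json"
	"fmt"
)

// crictlImages mirrors the relevant part of `crictl images --output json`.
type crictlImages struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// missingImages returns which of the wanted tags are not present in the output.
func missingImages(crictlJSON []byte, wanted []string) ([]string, error) {
	var out crictlImages
	if err := json.Unmarshal(crictlJSON, &out); err != nil {
		return nil, err
	}
	have := map[string]bool{}
	for _, img := range out.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	var missing []string
	for _, w := range wanted {
		if !have[w] {
			missing = append(missing, w)
		}
	}
	return missing, nil
}

func main() {
	sample := []byte(`{"images":[{"id":"sha256:abc","repoTags":["registry.k8s.io/pause:3.2"]}]}`)
	missing, err := missingImages(sample, []string{
		"registry.k8s.io/pause:3.2",
		"registry.k8s.io/etcd:3.4.13-0",
	})
	fmt.Println(missing, err)
}
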
	I0806 00:14:53.311912   61720 kubeadm.go:934] updating node { 192.168.72.112 8443 v1.20.0 crio true true} ...
	I0806 00:14:53.312034   61720 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-907863 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.112
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-907863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 00:14:53.312108   61720 ssh_runner.go:195] Run: crio config
	I0806 00:14:53.380642   61720 cni.go:84] Creating CNI manager for ""
	I0806 00:14:53.380662   61720 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 00:14:53.380674   61720 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 00:14:53.380698   61720 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.112 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-907863 NodeName:kubernetes-upgrade-907863 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.112"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.112 CgroupDriver:cgroupfs
ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0806 00:14:53.380923   61720 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.112
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-907863"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.112
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.112"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 00:14:53.380997   61720 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0806 00:14:53.395339   61720 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 00:14:53.395423   61720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 00:14:53.411555   61720 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0806 00:14:53.433132   61720 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 00:14:53.455825   61720 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0806 00:14:53.476294   61720 ssh_runner.go:195] Run: grep 192.168.72.112	control-plane.minikube.internal$ /etc/hosts
	I0806 00:14:53.480668   61720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.112	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 00:14:53.499600   61720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:14:53.652974   61720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 00:14:53.677860   61720 certs.go:68] Setting up /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863 for IP: 192.168.72.112
	I0806 00:14:53.677891   61720 certs.go:194] generating shared ca certs ...
	I0806 00:14:53.677911   61720 certs.go:226] acquiring lock for ca certs: {Name:mkf35a042c1656d191f542eee7fa087aad4d29d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:14:53.678068   61720 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key
	I0806 00:14:53.678134   61720 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key
	I0806 00:14:53.678149   61720 certs.go:256] generating profile certs ...
	I0806 00:14:53.678226   61720 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/client.key
	I0806 00:14:53.678247   61720 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/client.crt with IP's: []
	I0806 00:14:53.891591   61720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/client.crt ...
	I0806 00:14:53.891629   61720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/client.crt: {Name:mka73080179836a3e5f00f6563ab46864f07d0b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:14:53.891808   61720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/client.key ...
	I0806 00:14:53.891824   61720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/client.key: {Name:mka33cfcfc39b86c3df16be006a98c42ce1b23f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:14:53.891911   61720 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/apiserver.key.777d71ca
	I0806 00:14:53.891933   61720 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/apiserver.crt.777d71ca with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.112]
	I0806 00:14:54.037095   61720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/apiserver.crt.777d71ca ...
	I0806 00:14:54.037146   61720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/apiserver.crt.777d71ca: {Name:mkdbd1ad9bf1e099ce927cbbd16ee9537c57abec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:14:54.037338   61720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/apiserver.key.777d71ca ...
	I0806 00:14:54.037353   61720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/apiserver.key.777d71ca: {Name:mke232de9779080cad9e9caed41be9d6d22833d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:14:54.037428   61720 certs.go:381] copying /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/apiserver.crt.777d71ca -> /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/apiserver.crt
	I0806 00:14:54.037527   61720 certs.go:385] copying /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/apiserver.key.777d71ca -> /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/apiserver.key
	I0806 00:14:54.037593   61720 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/proxy-client.key
	I0806 00:14:54.037611   61720 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/proxy-client.crt with IP's: []
	I0806 00:14:54.104925   61720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/proxy-client.crt ...
	I0806 00:14:54.104968   61720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/proxy-client.crt: {Name:mk963f01277aaeaa47218702211ab49a2a05b2d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:14:54.158476   61720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/proxy-client.key ...
	I0806 00:14:54.158516   61720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/proxy-client.key: {Name:mk3b859fbef7364d8f865e5e69cf276e01b899be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
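	For readers following the certs.go steps above: the profile certificates are signed with the IP SANs listed in the log (10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.72.112 for the apiserver cert). A minimal Go sketch of that SAN wiring, assuming a throwaway self-signed certificate rather than minikube's real CA-signed path:

// Illustrative sketch only, not minikube's crypto.go: generate a cert whose
// IP SANs match the "Generating cert ... with IP's: [...]" line above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs taken from the log line above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.72.112"),
		},
	}
	// Self-signed here purely for illustration; minikube signs with its CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}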
	I0806 00:14:54.158797   61720 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792.pem (1338 bytes)
	W0806 00:14:54.158850   61720 certs.go:480] ignoring /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792_empty.pem, impossibly tiny 0 bytes
	I0806 00:14:54.158864   61720 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem (1679 bytes)
	I0806 00:14:54.158896   61720 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem (1082 bytes)
	I0806 00:14:54.158952   61720 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem (1123 bytes)
	I0806 00:14:54.158997   61720 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem (1679 bytes)
	I0806 00:14:54.159081   61720 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem (1708 bytes)
	I0806 00:14:54.159895   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 00:14:54.189387   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 00:14:54.217878   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 00:14:54.247509   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0806 00:14:54.276131   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0806 00:14:54.306893   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0806 00:14:54.334293   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 00:14:54.362197   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 00:14:54.392159   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem --> /usr/share/ca-certificates/167922.pem (1708 bytes)
	I0806 00:14:54.420680   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 00:14:54.456455   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792.pem --> /usr/share/ca-certificates/16792.pem (1338 bytes)
	I0806 00:14:54.486139   61720 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 00:14:54.504953   61720 ssh_runner.go:195] Run: openssl version
	I0806 00:14:54.511853   61720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 00:14:54.524862   61720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:14:54.529585   61720 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:14:54.529642   61720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:14:54.535690   61720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 00:14:54.550055   61720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16792.pem && ln -fs /usr/share/ca-certificates/16792.pem /etc/ssl/certs/16792.pem"
	I0806 00:14:54.572183   61720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16792.pem
	I0806 00:14:54.582553   61720 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 23:03 /usr/share/ca-certificates/16792.pem
	I0806 00:14:54.582619   61720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16792.pem
	I0806 00:14:54.590967   61720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16792.pem /etc/ssl/certs/51391683.0"
	I0806 00:14:54.615850   61720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167922.pem && ln -fs /usr/share/ca-certificates/167922.pem /etc/ssl/certs/167922.pem"
	I0806 00:14:54.636139   61720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167922.pem
	I0806 00:14:54.641803   61720 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 23:03 /usr/share/ca-certificates/167922.pem
	I0806 00:14:54.641870   61720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167922.pem
	I0806 00:14:54.648958   61720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167922.pem /etc/ssl/certs/3ec20f2e.0"
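	The repeated "openssl x509 -hash -noout" / "ln -fs .../<hash>.0" pairs above register each CA certificate under its OpenSSL subject hash so the system trust store can resolve it. A minimal sketch of what one of those pairs amounts to, assuming the minikubeCA.pem path from the log and shelling out to openssl for the hash:

// Illustrative sketch only: compute the subject hash and create the
// /etc/ssl/certs/<hash>.0 symlink, mirroring the logged shell commands.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941", as seen above
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// Equivalent of "ln -fs": drop any stale link, then relink.
	_ = os.Remove(link)
	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
		panic(err)
	}
}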
	I0806 00:14:54.664107   61720 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 00:14:54.669336   61720 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0806 00:14:54.669395   61720 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-907863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.20.0 ClusterName:kubernetes-upgrade-907863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:14:54.669544   61720 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 00:14:54.669609   61720 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 00:14:54.719934   61720 cri.go:89] found id: ""
	I0806 00:14:54.719996   61720 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 00:14:54.732226   61720 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 00:14:54.743958   61720 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 00:14:54.754033   61720 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 00:14:54.754060   61720 kubeadm.go:157] found existing configuration files:
	
	I0806 00:14:54.754116   61720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 00:14:54.763793   61720 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 00:14:54.763871   61720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 00:14:54.774255   61720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 00:14:54.784427   61720 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 00:14:54.784499   61720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 00:14:54.796822   61720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 00:14:54.807691   61720 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 00:14:54.807751   61720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 00:14:54.818222   61720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 00:14:54.830068   61720 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 00:14:54.830140   61720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 00:14:54.841016   61720 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 00:14:55.148580   61720 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
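	The kubeadm init invocation above is assembled from the versioned binaries PATH and a fixed list of preflight checks that minikube chooses to ignore. A small sketch of how such a command string could be put together (illustrative only, not minikube's bootstrapper code):

// Illustrative sketch only: build the command string seen in the log above.
package main

import (
	"fmt"
	"strings"
)

func main() {
	ignored := []string{
		"DirAvailable--etc-kubernetes-manifests",
		"DirAvailable--var-lib-minikube",
		"DirAvailable--var-lib-minikube-etcd",
		"FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml",
		"FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml",
		"FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml",
		"FileAvailable--etc-kubernetes-manifests-etcd.yaml",
		"Port-10250", "Swap", "NumCPU", "Mem",
	}
	cmd := fmt.Sprintf(
		`sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=%s`,
		strings.Join(ignored, ","),
	)
	fmt.Println(cmd)
}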
	I0806 00:14:56.850785   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | domain stopped-upgrade-936666 has defined MAC address 52:54:00:7b:85:58 in network mk-stopped-upgrade-936666
	I0806 00:14:56.851346   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | unable to find current IP address of domain stopped-upgrade-936666 in network mk-stopped-upgrade-936666
	I0806 00:14:56.851372   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | I0806 00:14:56.851297   62316 retry.go:31] will retry after 460.233406ms: waiting for machine to come up
	I0806 00:14:57.312810   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | domain stopped-upgrade-936666 has defined MAC address 52:54:00:7b:85:58 in network mk-stopped-upgrade-936666
	I0806 00:14:57.313343   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | unable to find current IP address of domain stopped-upgrade-936666 in network mk-stopped-upgrade-936666
	I0806 00:14:57.313368   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | I0806 00:14:57.313274   62316 retry.go:31] will retry after 673.92191ms: waiting for machine to come up
	I0806 00:14:57.988696   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | domain stopped-upgrade-936666 has defined MAC address 52:54:00:7b:85:58 in network mk-stopped-upgrade-936666
	I0806 00:14:57.989268   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | unable to find current IP address of domain stopped-upgrade-936666 in network mk-stopped-upgrade-936666
	I0806 00:14:57.989294   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | I0806 00:14:57.989206   62316 retry.go:31] will retry after 742.239606ms: waiting for machine to come up
	I0806 00:14:58.733669   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | domain stopped-upgrade-936666 has defined MAC address 52:54:00:7b:85:58 in network mk-stopped-upgrade-936666
	I0806 00:14:58.734242   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | unable to find current IP address of domain stopped-upgrade-936666 in network mk-stopped-upgrade-936666
	I0806 00:14:58.734267   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | I0806 00:14:58.734186   62316 retry.go:31] will retry after 1.085265631s: waiting for machine to come up
	I0806 00:14:59.821563   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | domain stopped-upgrade-936666 has defined MAC address 52:54:00:7b:85:58 in network mk-stopped-upgrade-936666
	I0806 00:14:59.822095   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | unable to find current IP address of domain stopped-upgrade-936666 in network mk-stopped-upgrade-936666
	I0806 00:14:59.822120   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | I0806 00:14:59.822056   62316 retry.go:31] will retry after 1.312616827s: waiting for machine to come up
	I0806 00:15:01.136328   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | domain stopped-upgrade-936666 has defined MAC address 52:54:00:7b:85:58 in network mk-stopped-upgrade-936666
	I0806 00:15:01.136818   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | unable to find current IP address of domain stopped-upgrade-936666 in network mk-stopped-upgrade-936666
	I0806 00:15:01.136861   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | I0806 00:15:01.136762   62316 retry.go:31] will retry after 1.457249872s: waiting for machine to come up
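	The retry.go lines above poll for the stopped-upgrade VM's IP with a delay that grows on each attempt. A rough sketch of that wait-for-IP pattern, where lookupIP is a hypothetical stand-in for the libvirt lease lookup and the backoff constants are assumptions, not libmachine's exact values:

// Illustrative sketch only: retry with a growing, jittered delay until the
// machine reports an IP or the deadline passes.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a placeholder; the real code queries the libvirt DHCP leases.
func lookupIP() (string, error) { return "", errors.New("no lease yet") }

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Jitter produces the uneven intervals visible in the log.
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay *= 2
	}
	fmt.Println("timed out waiting for machine IP")
}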
	I0806 00:14:56.879774   62044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 00:14:56.952696   62044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 00:14:57.016032   62044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0806 00:14:57.114988   62044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/pause-161508/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0806 00:14:57.176152   62044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/pause-161508/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0806 00:14:57.210978   62044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/pause-161508/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 00:14:57.252853   62044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/pause-161508/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 00:14:57.316820   62044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792.pem --> /usr/share/ca-certificates/16792.pem (1338 bytes)
	I0806 00:14:57.363002   62044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem --> /usr/share/ca-certificates/167922.pem (1708 bytes)
	I0806 00:14:57.399698   62044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 00:14:57.432814   62044 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 00:14:57.453941   62044 ssh_runner.go:195] Run: openssl version
	I0806 00:14:57.464156   62044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16792.pem && ln -fs /usr/share/ca-certificates/16792.pem /etc/ssl/certs/16792.pem"
	I0806 00:14:57.489040   62044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16792.pem
	I0806 00:14:57.494783   62044 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 23:03 /usr/share/ca-certificates/16792.pem
	I0806 00:14:57.494877   62044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16792.pem
	I0806 00:14:57.504614   62044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16792.pem /etc/ssl/certs/51391683.0"
	I0806 00:14:57.517885   62044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167922.pem && ln -fs /usr/share/ca-certificates/167922.pem /etc/ssl/certs/167922.pem"
	I0806 00:14:57.532455   62044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167922.pem
	I0806 00:14:57.538611   62044 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 23:03 /usr/share/ca-certificates/167922.pem
	I0806 00:14:57.538681   62044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167922.pem
	I0806 00:14:57.548094   62044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167922.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 00:14:57.563012   62044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 00:14:57.580706   62044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:14:57.587499   62044 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:14:57.587569   62044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:14:57.600755   62044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 00:14:57.617274   62044 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 00:14:57.625073   62044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0806 00:14:57.633761   62044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0806 00:14:57.642962   62044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0806 00:14:57.651893   62044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0806 00:14:57.660085   62044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0806 00:14:57.675488   62044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
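	The "openssl x509 -noout ... -checkend 86400" runs above ask whether each existing control-plane certificate remains valid for at least another 24 hours before the pause-161508 cluster is restarted. An equivalent check in Go, assuming the apiserver-kubelet-client path from the log:

// Illustrative sketch only: the -checkend 86400 question answered with crypto/x509.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h; it would be regenerated")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}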
	I0806 00:14:57.683789   62044 kubeadm.go:392] StartCluster: {Name:pause-161508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:pause-161508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.118 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false o
lm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:14:57.683936   62044 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 00:14:57.684025   62044 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 00:14:57.779721   62044 cri.go:89] found id: "7d8cf53ea71f671cd11c77d76585125000808e1e5e9dbdf057515fae3694c8c2"
	I0806 00:14:57.779749   62044 cri.go:89] found id: "6c3e3869967dcdea9538e99cfba9fa7cbeab8604b70330171ff36214ad65dc4f"
	I0806 00:14:57.779757   62044 cri.go:89] found id: "b5f13fe4c6e99948bd3db06aa7e20e2aa8073f836fe73e27f62926299efa70db"
	I0806 00:14:57.779765   62044 cri.go:89] found id: "1bf2df2d254dca2dd27d3eae24da873f45a9ff1fbdfc0ea1dd1a35201bcd069a"
	I0806 00:14:57.779771   62044 cri.go:89] found id: "e7bde654f01ecd95054cba7e1831b15349cfc28b44f4f1a6722bec18d022099a"
	I0806 00:14:57.779776   62044 cri.go:89] found id: "6471bcdcb4ee5e45f9f8c1500088cb267ab957b707b6c9091e097c704b2d66d6"
	I0806 00:14:57.779780   62044 cri.go:89] found id: "bfaba2e9c5b00ff3bf65111355285eff0b912f5fc7bfb869f50fb2fffad3292c"
	I0806 00:14:57.779785   62044 cri.go:89] found id: "97903d796b6207952efa4d432caf2c3e60811379a89eae5fb77e2fa8c1a1d028"
	I0806 00:14:57.779790   62044 cri.go:89] found id: "895560f466b423fe1dfc2c8b3564008271d04a68b72ddc661ae492d8d6fe1900"
	I0806 00:14:57.779799   62044 cri.go:89] found id: "675d1cd5f51ab58fac223676eede1d4e46868c8e294ae5a521cd08300f62038b"
	I0806 00:14:57.779804   62044 cri.go:89] found id: ""
	I0806 00:14:57.779859   62044 ssh_runner.go:195] Run: sudo runc list -f json
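	The "found id:" lines above come from listing every kube-system container, in any state, through crictl with a pod-namespace label filter, exactly as in the logged command. A minimal sketch of that listing step:

// Illustrative sketch only: run the crictl command from the log and print
// the bare container IDs, one per line.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id)
	}
}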
	
	
	==> CRI-O <==
	Aug 06 00:15:34 pause-161508 crio[2471]: time="2024-08-06 00:15:34.777039536Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722903334777005267,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f95923b2-a475-456c-837e-44e8dd7b26fc name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 00:15:34 pause-161508 crio[2471]: time="2024-08-06 00:15:34.777826469Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b8320566-ed0e-4fad-ac35-3a121c438d41 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 00:15:34 pause-161508 crio[2471]: time="2024-08-06 00:15:34.777899008Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b8320566-ed0e-4fad-ac35-3a121c438d41 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 00:15:34 pause-161508 crio[2471]: time="2024-08-06 00:15:34.778234683Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb38fda641e398e7269c4fc98840654d4ef417ccc04c0dbf6c34580362b741dc,PodSandboxId:11fed89ca356a76abf9f5cf4a8cb9b1d34a89a2c434ff78a4f706070f378a78c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722903317683439547,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-55wbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d90e043a-0525-4e59-9712-70116590d766,},Annotations:map[string]string{io.kubernetes.container.hash: acb1bb23,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4086fd17ccf0a3abca003e8a74c3e9407ee2b4f844d50f018f01889b004f2e72,PodSandboxId:777385c422e42d154fb7a8bb5b55b02aecb6d77ebfca355ae637275547f7ae8a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722903313913286196,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 515bfc503159b4bbe954e027b35cf1cb,},Annotations:map[string]string{io.kubernetes.container.hash: 574d5a6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:982476c4266b39f507a2b02b008aa89568d49d4e23c11d16111623b19660630c,PodSandboxId:b9198d20e0c75cff4e61b5ff0ad932276cd4bd88de410bc9dbe4420f7e14b591,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722903313892238688,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e677cb0bf72cff2cfe647e5180a645c6,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8549cef6ca6f2186a15e55ba9b40db7f6b2948b5ae1430b198aaf36324fe4d12,PodSandboxId:9c50be63bb0e17758fb1fc280928e9a5bdd051b8a4babb033e39846cb22d746b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722903313862725842,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e853443f8265426dc355b3c076e12bba,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc6df04b5cb9b90f3374c1e2cd15ec1fb3a0df999fa901662eecfe2bb3d6ee58,PodSandboxId:8c802c9490a1a015c30e657e438b732d323d9ebadf946c75fe8583444defe9d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722903313860213377,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa0ba9a109192f9bf83e28dceb8ed1ab,},Annotations:map[string]string{io.kubernetes.container.hash: bf72a8bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7a29ddb2d7a6b8db6a21aa6442f10a220f961e45a0453bef7e140494e61f546,PodSandboxId:0a9567f716680b7eac2daf2c025fc1a51bb9618cc918b6ec21eedb02307b2a2b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722903297750733768,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9wwqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 111220a5-a088-4652-a1a3-284f2d1b111b,},Annotations:map[string]string{io.kubernetes.container.hash: 227892e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adcecbbd6a938c51103d7edc01cd0855e22c469f90e20bf3e4a76fbd715a4744,PodSandboxId:11fed89ca356a76abf9f5cf4a8cb9b1d34a89a2c434ff78a4f706070f378a78c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722903296535177581,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-55wbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d90e043a-0525-4e59-9712-70116590d766,},Annotations:map[string]string{io.kubernetes.container.hash: acb1bb
23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d8cf53ea71f671cd11c77d76585125000808e1e5e9dbdf057515fae3694c8c2,PodSandboxId:b9198d20e0c75cff4e61b5ff0ad932276cd4bd88de410bc9dbe4420f7e14b591,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722903296536992644,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e677cb0bf72cff2cfe647e5180a645c6,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.co
ntainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c3e3869967dcdea9538e99cfba9fa7cbeab8604b70330171ff36214ad65dc4f,PodSandboxId:777385c422e42d154fb7a8bb5b55b02aecb6d77ebfca355ae637275547f7ae8a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722903296438781418,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 515bfc503159b4bbe954e027b35cf1cb,},Annotations:map[string]string{io.kubernetes.container.hash: 574d5a6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bf2df2d254dca2dd27d3eae24da873f45a9ff1fbdfc0ea1dd1a35201bcd069a,PodSandboxId:9c50be63bb0e17758fb1fc280928e9a5bdd051b8a4babb033e39846cb22d746b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722903296303133201,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e853443f8265426dc355b3c076e12bba,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5f13fe4c6e99948bd3db06aa7e20e2aa8073f836fe73e27f62926299efa70db,PodSandboxId:8c802c9490a1a015c30e657e438b732d323d9ebadf946c75fe8583444defe9d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722903296335835381,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa0ba9a109192f9bf83e28dceb8ed1ab,},Annotations:map[string]string{io.kubernetes.container.hash: bf72a8bf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7bde654f01ecd95054cba7e1831b15349cfc28b44f4f1a6722bec18d022099a,PodSandboxId:cdbab9ce1e914d71878d039e4d5f1059541433a0180f911897309405ae8b389a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722903240464916282,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9wwqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 111220a5-a088-4652-a1a3-284f2d1b111b,},Annotations:map[string]string{io.kubernetes.container.hash: 227892e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b8320566-ed0e-4fad-ac35-3a121c438d41 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 00:15:34 pause-161508 crio[2471]: time="2024-08-06 00:15:34.823793465Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4611bc39-0f7a-46bf-b546-5c9fa83d8b2f name=/runtime.v1.RuntimeService/Version
	Aug 06 00:15:34 pause-161508 crio[2471]: time="2024-08-06 00:15:34.823873290Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4611bc39-0f7a-46bf-b546-5c9fa83d8b2f name=/runtime.v1.RuntimeService/Version
	Aug 06 00:15:34 pause-161508 crio[2471]: time="2024-08-06 00:15:34.825339487Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aa725c98-4e19-4eaa-b468-cbad9cc0c811 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 00:15:34 pause-161508 crio[2471]: time="2024-08-06 00:15:34.826002463Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722903334825977739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aa725c98-4e19-4eaa-b468-cbad9cc0c811 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 00:15:34 pause-161508 crio[2471]: time="2024-08-06 00:15:34.826617499Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3bec0b83-8c19-4026-90c4-0493b1cfa8b0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 00:15:34 pause-161508 crio[2471]: time="2024-08-06 00:15:34.826671806Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3bec0b83-8c19-4026-90c4-0493b1cfa8b0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 00:15:34 pause-161508 crio[2471]: time="2024-08-06 00:15:34.826928816Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb38fda641e398e7269c4fc98840654d4ef417ccc04c0dbf6c34580362b741dc,PodSandboxId:11fed89ca356a76abf9f5cf4a8cb9b1d34a89a2c434ff78a4f706070f378a78c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722903317683439547,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-55wbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d90e043a-0525-4e59-9712-70116590d766,},Annotations:map[string]string{io.kubernetes.container.hash: acb1bb23,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4086fd17ccf0a3abca003e8a74c3e9407ee2b4f844d50f018f01889b004f2e72,PodSandboxId:777385c422e42d154fb7a8bb5b55b02aecb6d77ebfca355ae637275547f7ae8a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722903313913286196,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 515bfc503159b4bbe954e027b35cf1cb,},Annotations:map[string]string{io.kubernetes.container.hash: 574d5a6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:982476c4266b39f507a2b02b008aa89568d49d4e23c11d16111623b19660630c,PodSandboxId:b9198d20e0c75cff4e61b5ff0ad932276cd4bd88de410bc9dbe4420f7e14b591,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722903313892238688,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e677cb0bf72cff2cfe647e5180a645c6,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8549cef6ca6f2186a15e55ba9b40db7f6b2948b5ae1430b198aaf36324fe4d12,PodSandboxId:9c50be63bb0e17758fb1fc280928e9a5bdd051b8a4babb033e39846cb22d746b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722903313862725842,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e853443f8265426dc355b3c076e12bba,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc6df04b5cb9b90f3374c1e2cd15ec1fb3a0df999fa901662eecfe2bb3d6ee58,PodSandboxId:8c802c9490a1a015c30e657e438b732d323d9ebadf946c75fe8583444defe9d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722903313860213377,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa0ba9a109192f9bf83e28dceb8ed1ab,},Annotations:map[string]string{io.kubernetes.container.hash: bf72a8bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7a29ddb2d7a6b8db6a21aa6442f10a220f961e45a0453bef7e140494e61f546,PodSandboxId:0a9567f716680b7eac2daf2c025fc1a51bb9618cc918b6ec21eedb02307b2a2b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722903297750733768,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9wwqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 111220a5-a088-4652-a1a3-284f2d1b111b,},Annotations:map[string]string{io.kubernetes.container.hash: 227892e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adcecbbd6a938c51103d7edc01cd0855e22c469f90e20bf3e4a76fbd715a4744,PodSandboxId:11fed89ca356a76abf9f5cf4a8cb9b1d34a89a2c434ff78a4f706070f378a78c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722903296535177581,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-55wbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d90e043a-0525-4e59-9712-70116590d766,},Annotations:map[string]string{io.kubernetes.container.hash: acb1bb
23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d8cf53ea71f671cd11c77d76585125000808e1e5e9dbdf057515fae3694c8c2,PodSandboxId:b9198d20e0c75cff4e61b5ff0ad932276cd4bd88de410bc9dbe4420f7e14b591,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722903296536992644,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e677cb0bf72cff2cfe647e5180a645c6,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.co
ntainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c3e3869967dcdea9538e99cfba9fa7cbeab8604b70330171ff36214ad65dc4f,PodSandboxId:777385c422e42d154fb7a8bb5b55b02aecb6d77ebfca355ae637275547f7ae8a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722903296438781418,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 515bfc503159b4bbe954e027b35cf1cb,},Annotations:map[string]string{io.kubernetes.container.hash: 574d5a6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bf2df2d254dca2dd27d3eae24da873f45a9ff1fbdfc0ea1dd1a35201bcd069a,PodSandboxId:9c50be63bb0e17758fb1fc280928e9a5bdd051b8a4babb033e39846cb22d746b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722903296303133201,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e853443f8265426dc355b3c076e12bba,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5f13fe4c6e99948bd3db06aa7e20e2aa8073f836fe73e27f62926299efa70db,PodSandboxId:8c802c9490a1a015c30e657e438b732d323d9ebadf946c75fe8583444defe9d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722903296335835381,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa0ba9a109192f9bf83e28dceb8ed1ab,},Annotations:map[string]string{io.kubernetes.container.hash: bf72a8bf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7bde654f01ecd95054cba7e1831b15349cfc28b44f4f1a6722bec18d022099a,PodSandboxId:cdbab9ce1e914d71878d039e4d5f1059541433a0180f911897309405ae8b389a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722903240464916282,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9wwqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 111220a5-a088-4652-a1a3-284f2d1b111b,},Annotations:map[string]string{io.kubernetes.container.hash: 227892e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3bec0b83-8c19-4026-90c4-0493b1cfa8b0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 00:15:34 pause-161508 crio[2471]: time="2024-08-06 00:15:34.871735790Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6b271916-566a-4b53-98f4-d47461288033 name=/runtime.v1.RuntimeService/Version
	Aug 06 00:15:34 pause-161508 crio[2471]: time="2024-08-06 00:15:34.871812046Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6b271916-566a-4b53-98f4-d47461288033 name=/runtime.v1.RuntimeService/Version
	Aug 06 00:15:34 pause-161508 crio[2471]: time="2024-08-06 00:15:34.872784129Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3295a1db-2258-4ac1-ad06-5c9bfd2266e7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 00:15:34 pause-161508 crio[2471]: time="2024-08-06 00:15:34.873137564Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722903334873116378,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3295a1db-2258-4ac1-ad06-5c9bfd2266e7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 00:15:34 pause-161508 crio[2471]: time="2024-08-06 00:15:34.873749967Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=03b7c4a5-4543-4159-8d54-09ae12a407ed name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 00:15:34 pause-161508 crio[2471]: time="2024-08-06 00:15:34.873804255Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=03b7c4a5-4543-4159-8d54-09ae12a407ed name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 00:15:34 pause-161508 crio[2471]: time="2024-08-06 00:15:34.874044689Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb38fda641e398e7269c4fc98840654d4ef417ccc04c0dbf6c34580362b741dc,PodSandboxId:11fed89ca356a76abf9f5cf4a8cb9b1d34a89a2c434ff78a4f706070f378a78c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722903317683439547,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-55wbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d90e043a-0525-4e59-9712-70116590d766,},Annotations:map[string]string{io.kubernetes.container.hash: acb1bb23,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4086fd17ccf0a3abca003e8a74c3e9407ee2b4f844d50f018f01889b004f2e72,PodSandboxId:777385c422e42d154fb7a8bb5b55b02aecb6d77ebfca355ae637275547f7ae8a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722903313913286196,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 515bfc503159b4bbe954e027b35cf1cb,},Annotations:map[string]string{io.kubernetes.container.hash: 574d5a6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:982476c4266b39f507a2b02b008aa89568d49d4e23c11d16111623b19660630c,PodSandboxId:b9198d20e0c75cff4e61b5ff0ad932276cd4bd88de410bc9dbe4420f7e14b591,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722903313892238688,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e677cb0bf72cff2cfe647e5180a645c6,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8549cef6ca6f2186a15e55ba9b40db7f6b2948b5ae1430b198aaf36324fe4d12,PodSandboxId:9c50be63bb0e17758fb1fc280928e9a5bdd051b8a4babb033e39846cb22d746b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722903313862725842,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e853443f8265426dc355b3c076e12bba,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc6df04b5cb9b90f3374c1e2cd15ec1fb3a0df999fa901662eecfe2bb3d6ee58,PodSandboxId:8c802c9490a1a015c30e657e438b732d323d9ebadf946c75fe8583444defe9d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722903313860213377,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa0ba9a109192f9bf83e28dceb8ed1ab,},Annotations:map[string]string{io.kubernetes.container.hash: bf72a8bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7a29ddb2d7a6b8db6a21aa6442f10a220f961e45a0453bef7e140494e61f546,PodSandboxId:0a9567f716680b7eac2daf2c025fc1a51bb9618cc918b6ec21eedb02307b2a2b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722903297750733768,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9wwqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 111220a5-a088-4652-a1a3-284f2d1b111b,},Annotations:map[string]string{io.kubernetes.container.hash: 227892e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adcecbbd6a938c51103d7edc01cd0855e22c469f90e20bf3e4a76fbd715a4744,PodSandboxId:11fed89ca356a76abf9f5cf4a8cb9b1d34a89a2c434ff78a4f706070f378a78c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722903296535177581,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-55wbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d90e043a-0525-4e59-9712-70116590d766,},Annotations:map[string]string{io.kubernetes.container.hash: acb1bb
23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d8cf53ea71f671cd11c77d76585125000808e1e5e9dbdf057515fae3694c8c2,PodSandboxId:b9198d20e0c75cff4e61b5ff0ad932276cd4bd88de410bc9dbe4420f7e14b591,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722903296536992644,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e677cb0bf72cff2cfe647e5180a645c6,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.co
ntainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c3e3869967dcdea9538e99cfba9fa7cbeab8604b70330171ff36214ad65dc4f,PodSandboxId:777385c422e42d154fb7a8bb5b55b02aecb6d77ebfca355ae637275547f7ae8a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722903296438781418,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 515bfc503159b4bbe954e027b35cf1cb,},Annotations:map[string]string{io.kubernetes.container.hash: 574d5a6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bf2df2d254dca2dd27d3eae24da873f45a9ff1fbdfc0ea1dd1a35201bcd069a,PodSandboxId:9c50be63bb0e17758fb1fc280928e9a5bdd051b8a4babb033e39846cb22d746b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722903296303133201,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e853443f8265426dc355b3c076e12bba,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5f13fe4c6e99948bd3db06aa7e20e2aa8073f836fe73e27f62926299efa70db,PodSandboxId:8c802c9490a1a015c30e657e438b732d323d9ebadf946c75fe8583444defe9d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722903296335835381,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa0ba9a109192f9bf83e28dceb8ed1ab,},Annotations:map[string]string{io.kubernetes.container.hash: bf72a8bf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7bde654f01ecd95054cba7e1831b15349cfc28b44f4f1a6722bec18d022099a,PodSandboxId:cdbab9ce1e914d71878d039e4d5f1059541433a0180f911897309405ae8b389a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722903240464916282,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9wwqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 111220a5-a088-4652-a1a3-284f2d1b111b,},Annotations:map[string]string{io.kubernetes.container.hash: 227892e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=03b7c4a5-4543-4159-8d54-09ae12a407ed name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 00:15:34 pause-161508 crio[2471]: time="2024-08-06 00:15:34.920060461Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8636b0e3-041b-4c61-b91c-3ae53f6f631a name=/runtime.v1.RuntimeService/Version
	Aug 06 00:15:34 pause-161508 crio[2471]: time="2024-08-06 00:15:34.920162193Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8636b0e3-041b-4c61-b91c-3ae53f6f631a name=/runtime.v1.RuntimeService/Version
	Aug 06 00:15:34 pause-161508 crio[2471]: time="2024-08-06 00:15:34.922227386Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ca4dcc8c-d74d-4f7b-a2a3-930388d7000c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 00:15:34 pause-161508 crio[2471]: time="2024-08-06 00:15:34.925797809Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722903334925764186,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ca4dcc8c-d74d-4f7b-a2a3-930388d7000c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 00:15:34 pause-161508 crio[2471]: time="2024-08-06 00:15:34.926926827Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f6ae62cc-d206-42d5-8ec5-7973a779a198 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 00:15:34 pause-161508 crio[2471]: time="2024-08-06 00:15:34.927085530Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f6ae62cc-d206-42d5-8ec5-7973a779a198 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 00:15:34 pause-161508 crio[2471]: time="2024-08-06 00:15:34.927366801Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb38fda641e398e7269c4fc98840654d4ef417ccc04c0dbf6c34580362b741dc,PodSandboxId:11fed89ca356a76abf9f5cf4a8cb9b1d34a89a2c434ff78a4f706070f378a78c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722903317683439547,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-55wbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d90e043a-0525-4e59-9712-70116590d766,},Annotations:map[string]string{io.kubernetes.container.hash: acb1bb23,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4086fd17ccf0a3abca003e8a74c3e9407ee2b4f844d50f018f01889b004f2e72,PodSandboxId:777385c422e42d154fb7a8bb5b55b02aecb6d77ebfca355ae637275547f7ae8a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722903313913286196,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 515bfc503159b4bbe954e027b35cf1cb,},Annotations:map[string]string{io.kubernetes.container.hash: 574d5a6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:982476c4266b39f507a2b02b008aa89568d49d4e23c11d16111623b19660630c,PodSandboxId:b9198d20e0c75cff4e61b5ff0ad932276cd4bd88de410bc9dbe4420f7e14b591,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722903313892238688,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e677cb0bf72cff2cfe647e5180a645c6,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8549cef6ca6f2186a15e55ba9b40db7f6b2948b5ae1430b198aaf36324fe4d12,PodSandboxId:9c50be63bb0e17758fb1fc280928e9a5bdd051b8a4babb033e39846cb22d746b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722903313862725842,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e853443f8265426dc355b3c076e12bba,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc6df04b5cb9b90f3374c1e2cd15ec1fb3a0df999fa901662eecfe2bb3d6ee58,PodSandboxId:8c802c9490a1a015c30e657e438b732d323d9ebadf946c75fe8583444defe9d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722903313860213377,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa0ba9a109192f9bf83e28dceb8ed1ab,},Annotations:map[string]string{io.kubernetes.container.hash: bf72a8bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7a29ddb2d7a6b8db6a21aa6442f10a220f961e45a0453bef7e140494e61f546,PodSandboxId:0a9567f716680b7eac2daf2c025fc1a51bb9618cc918b6ec21eedb02307b2a2b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722903297750733768,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9wwqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 111220a5-a088-4652-a1a3-284f2d1b111b,},Annotations:map[string]string{io.kubernetes.container.hash: 227892e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adcecbbd6a938c51103d7edc01cd0855e22c469f90e20bf3e4a76fbd715a4744,PodSandboxId:11fed89ca356a76abf9f5cf4a8cb9b1d34a89a2c434ff78a4f706070f378a78c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722903296535177581,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-55wbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d90e043a-0525-4e59-9712-70116590d766,},Annotations:map[string]string{io.kubernetes.container.hash: acb1bb
23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d8cf53ea71f671cd11c77d76585125000808e1e5e9dbdf057515fae3694c8c2,PodSandboxId:b9198d20e0c75cff4e61b5ff0ad932276cd4bd88de410bc9dbe4420f7e14b591,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722903296536992644,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e677cb0bf72cff2cfe647e5180a645c6,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.co
ntainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c3e3869967dcdea9538e99cfba9fa7cbeab8604b70330171ff36214ad65dc4f,PodSandboxId:777385c422e42d154fb7a8bb5b55b02aecb6d77ebfca355ae637275547f7ae8a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722903296438781418,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 515bfc503159b4bbe954e027b35cf1cb,},Annotations:map[string]string{io.kubernetes.container.hash: 574d5a6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bf2df2d254dca2dd27d3eae24da873f45a9ff1fbdfc0ea1dd1a35201bcd069a,PodSandboxId:9c50be63bb0e17758fb1fc280928e9a5bdd051b8a4babb033e39846cb22d746b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722903296303133201,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e853443f8265426dc355b3c076e12bba,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5f13fe4c6e99948bd3db06aa7e20e2aa8073f836fe73e27f62926299efa70db,PodSandboxId:8c802c9490a1a015c30e657e438b732d323d9ebadf946c75fe8583444defe9d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722903296335835381,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa0ba9a109192f9bf83e28dceb8ed1ab,},Annotations:map[string]string{io.kubernetes.container.hash: bf72a8bf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7bde654f01ecd95054cba7e1831b15349cfc28b44f4f1a6722bec18d022099a,PodSandboxId:cdbab9ce1e914d71878d039e4d5f1059541433a0180f911897309405ae8b389a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722903240464916282,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9wwqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 111220a5-a088-4652-a1a3-284f2d1b111b,},Annotations:map[string]string{io.kubernetes.container.hash: 227892e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f6ae62cc-d206-42d5-8ec5-7973a779a198 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	bb38fda641e39       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   17 seconds ago       Running             kube-proxy                2                   11fed89ca356a       kube-proxy-55wbx
	4086fd17ccf0a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   21 seconds ago       Running             etcd                      2                   777385c422e42       etcd-pause-161508
	982476c4266b3       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   21 seconds ago       Running             kube-scheduler            2                   b9198d20e0c75       kube-scheduler-pause-161508
	8549cef6ca6f2       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   21 seconds ago       Running             kube-controller-manager   2                   9c50be63bb0e1       kube-controller-manager-pause-161508
	bc6df04b5cb9b       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   21 seconds ago       Running             kube-apiserver            2                   8c802c9490a1a       kube-apiserver-pause-161508
	d7a29ddb2d7a6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   37 seconds ago       Running             coredns                   1                   0a9567f716680       coredns-7db6d8ff4d-9wwqk
	7d8cf53ea71f6       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   38 seconds ago       Exited              kube-scheduler            1                   b9198d20e0c75       kube-scheduler-pause-161508
	adcecbbd6a938       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   38 seconds ago       Exited              kube-proxy                1                   11fed89ca356a       kube-proxy-55wbx
	6c3e3869967dc       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   38 seconds ago       Exited              etcd                      1                   777385c422e42       etcd-pause-161508
	b5f13fe4c6e99       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   38 seconds ago       Exited              kube-apiserver            1                   8c802c9490a1a       kube-apiserver-pause-161508
	1bf2df2d254dc       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   38 seconds ago       Exited              kube-controller-manager   1                   9c50be63bb0e1       kube-controller-manager-pause-161508
	e7bde654f01ec       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   cdbab9ce1e914       coredns-7db6d8ff4d-9wwqk
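	
Each control-plane container shows an Exited attempt 1 next to a Running attempt 2, which lines up with the kubelet restarts recorded in the node events further down. A minimal sketch for digging into one of the exited attempts, assuming crictl inside the VM:

	sudo crictl inspect b5f13fe4c6e99   # attempt-1 kube-apiserver; the full 64-char ID appears in the crio log above
	sudo crictl logs b5f13fe4c6e99      # its final log output before it exited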
	
	
	==> coredns [d7a29ddb2d7a6b8db6a21aa6442f10a220f961e45a0453bef7e140494e61f546] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:33467 - 42429 "HINFO IN 3203146776900514644.2698836118367998909. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01026401s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: unknown (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: unknown (get namespaces)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: unknown (get endpointslices.discovery.k8s.io)
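	
The `unknown (get services)` / `unknown (get namespaces)` watch failures from the restarted coredns mean the apiserver rejected its list/watch calls rather than being unreachable, usually a transient authorization error while the restarted apiserver warms up. A minimal sketch for confirming the coredns service account's permissions once the control plane settles, assuming the profile's kubeconfig context is named pause-161508 (minikube's default):

	kubectl --context pause-161508 auth can-i watch services --as=system:serviceaccount:kube-system:coredns
	kubectl --context pause-161508 auth can-i watch endpointslices.discovery.k8s.io --as=system:serviceaccount:kube-system:coredns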
	
	
	==> coredns [e7bde654f01ecd95054cba7e1831b15349cfc28b44f4f1a6722bec18d022099a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:41319 - 25821 "HINFO IN 2717171734076828573.5468262155880170471. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014901881s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1233790655]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Aug-2024 00:14:00.688) (total time: 30001ms):
	Trace[1233790655]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (00:14:30.689)
	Trace[1233790655]: [30.001861477s] [30.001861477s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[97412699]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Aug-2024 00:14:00.690) (total time: 30000ms):
	Trace[97412699]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (00:14:30.690)
	Trace[97412699]: [30.000839656s] [30.000839656s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[382621574]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Aug-2024 00:14:00.689) (total time: 30001ms):
	Trace[382621574]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (00:14:30.689)
	Trace[382621574]: [30.001726412s] [30.001726412s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
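	
This older coredns instance never reached the in-cluster apiserver VIP (every list against 10.96.0.1:443 timed out after 30s), consistent with kube-proxy being down at the time, and it was then terminated. A minimal sketch for checking that the VIP maps back to the real apiserver endpoint afterwards, assuming the pause-161508 context:

	kubectl --context pause-161508 get svc kubernetes -o wide
	kubectl --context pause-161508 get endpoints kubernetes   # should list the node's InternalIP plus the apiserver port (8443 by default in minikube)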
	
	
	==> describe nodes <==
	Name:               pause-161508
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-161508
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=pause-161508
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_06T00_13_45_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 00:13:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-161508
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 00:15:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Aug 2024 00:15:17 +0000   Tue, 06 Aug 2024 00:13:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Aug 2024 00:15:17 +0000   Tue, 06 Aug 2024 00:13:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Aug 2024 00:15:17 +0000   Tue, 06 Aug 2024 00:13:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Aug 2024 00:15:17 +0000   Tue, 06 Aug 2024 00:13:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.118
	  Hostname:    pause-161508
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 5ef8fca4ccaf4cb494720ebb268ed59b
	  System UUID:                5ef8fca4-ccaf-4cb4-9472-0ebb268ed59b
	  Boot ID:                    82a52a91-2eab-4313-92db-b2c395de80bd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-9wwqk                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     96s
	  kube-system                 etcd-pause-161508                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         111s
	  kube-system                 kube-apiserver-pause-161508             250m (12%)    0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-pause-161508    200m (10%)    0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-55wbx                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-scheduler-pause-161508             100m (5%)     0 (0%)      0 (0%)           0 (0%)         111s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 94s                  kube-proxy       
	  Normal  Starting                 17s                  kube-proxy       
	  Normal  Starting                 34s                  kube-proxy       
	  Normal  Starting                 117s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  117s (x8 over 117s)  kubelet          Node pause-161508 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s (x8 over 117s)  kubelet          Node pause-161508 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s (x7 over 117s)  kubelet          Node pause-161508 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  117s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    111s                 kubelet          Node pause-161508 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  111s                 kubelet          Node pause-161508 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     111s                 kubelet          Node pause-161508 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  111s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 111s                 kubelet          Starting kubelet.
	  Normal  NodeReady                110s                 kubelet          Node pause-161508 status is now: NodeReady
	  Normal  RegisteredNode           97s                  node-controller  Node pause-161508 event: Registered Node pause-161508 in Controller
	  Normal  Starting                 22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)    kubelet          Node pause-161508 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)    kubelet          Node pause-161508 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)    kubelet          Node pause-161508 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5s                   node-controller  Node pause-161508 event: Registered Node pause-161508 in Controller
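	
The node report shows three `Starting kubelet.` events and two `RegisteredNode` events, consistent with the repeated restarts this profile went through. A minimal sketch for reproducing this view directly, assuming the pause-161508 context:

	kubectl --context pause-161508 describe node pause-161508
	kubectl --context pause-161508 get events -A --field-selector involvedObject.name=pause-161508 --sort-by=.lastTimestamp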
	
	
	==> dmesg <==
	[  +8.233226] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.061945] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055087] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.174348] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.179878] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.309043] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +4.774092] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +0.067731] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.074764] systemd-fstab-generator[951]: Ignoring "noauto" option for root device
	[  +1.091143] kauditd_printk_skb: 57 callbacks suppressed
	[  +4.990784] systemd-fstab-generator[1289]: Ignoring "noauto" option for root device
	[  +0.098966] kauditd_printk_skb: 30 callbacks suppressed
	[ +14.741716] systemd-fstab-generator[1519]: Ignoring "noauto" option for root device
	[  +0.154545] kauditd_printk_skb: 21 callbacks suppressed
	[Aug 6 00:14] kauditd_printk_skb: 84 callbacks suppressed
	[ +42.628608] systemd-fstab-generator[2389]: Ignoring "noauto" option for root device
	[  +0.156399] systemd-fstab-generator[2401]: Ignoring "noauto" option for root device
	[  +0.236133] systemd-fstab-generator[2415]: Ignoring "noauto" option for root device
	[  +0.182052] systemd-fstab-generator[2427]: Ignoring "noauto" option for root device
	[  +0.317694] systemd-fstab-generator[2455]: Ignoring "noauto" option for root device
	[  +2.061400] systemd-fstab-generator[2766]: Ignoring "noauto" option for root device
	[Aug 6 00:15] kauditd_printk_skb: 195 callbacks suppressed
	[ +10.990703] systemd-fstab-generator[3379]: Ignoring "noauto" option for root device
	[ +17.423311] systemd-fstab-generator[3773]: Ignoring "noauto" option for root device
	[  +0.067034] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [4086fd17ccf0a3abca003e8a74c3e9407ee2b4f844d50f018f01889b004f2e72] <==
	{"level":"info","ts":"2024-08-06T00:15:14.33504Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-06T00:15:14.335592Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-06T00:15:14.335908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86c29206b457f123 switched to configuration voters=(9710484304057332003)"}
	{"level":"info","ts":"2024-08-06T00:15:14.359437Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"56e4fbef5627b38f","local-member-id":"86c29206b457f123","added-peer-id":"86c29206b457f123","added-peer-peer-urls":["https://192.168.39.118:2380"]}
	{"level":"info","ts":"2024-08-06T00:15:14.359674Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"56e4fbef5627b38f","local-member-id":"86c29206b457f123","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T00:15:14.359729Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T00:15:14.36122Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-06T00:15:14.370815Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"86c29206b457f123","initial-advertise-peer-urls":["https://192.168.39.118:2380"],"listen-peer-urls":["https://192.168.39.118:2380"],"advertise-client-urls":["https://192.168.39.118:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.118:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-06T00:15:14.370952Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-06T00:15:14.364797Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.118:2380"}
	{"level":"info","ts":"2024-08-06T00:15:14.371072Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.118:2380"}
	{"level":"info","ts":"2024-08-06T00:15:15.896345Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86c29206b457f123 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-06T00:15:15.896417Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86c29206b457f123 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-06T00:15:15.896453Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86c29206b457f123 received MsgPreVoteResp from 86c29206b457f123 at term 3"}
	{"level":"info","ts":"2024-08-06T00:15:15.896469Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86c29206b457f123 became candidate at term 4"}
	{"level":"info","ts":"2024-08-06T00:15:15.896477Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86c29206b457f123 received MsgVoteResp from 86c29206b457f123 at term 4"}
	{"level":"info","ts":"2024-08-06T00:15:15.896488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86c29206b457f123 became leader at term 4"}
	{"level":"info","ts":"2024-08-06T00:15:15.896497Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 86c29206b457f123 elected leader 86c29206b457f123 at term 4"}
	{"level":"info","ts":"2024-08-06T00:15:15.903042Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"86c29206b457f123","local-member-attributes":"{Name:pause-161508 ClientURLs:[https://192.168.39.118:2379]}","request-path":"/0/members/86c29206b457f123/attributes","cluster-id":"56e4fbef5627b38f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-06T00:15:15.903108Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T00:15:15.903234Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T00:15:15.903697Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-06T00:15:15.903754Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-06T00:15:15.905385Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.118:2379"}
	{"level":"info","ts":"2024-08-06T00:15:15.905669Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [6c3e3869967dcdea9538e99cfba9fa7cbeab8604b70330171ff36214ad65dc4f] <==
	{"level":"info","ts":"2024-08-06T00:14:58.237044Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.118:2380"}
	{"level":"info","ts":"2024-08-06T00:14:59.734339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86c29206b457f123 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-06T00:14:59.734401Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86c29206b457f123 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-06T00:14:59.734451Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86c29206b457f123 received MsgPreVoteResp from 86c29206b457f123 at term 2"}
	{"level":"info","ts":"2024-08-06T00:14:59.73447Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86c29206b457f123 became candidate at term 3"}
	{"level":"info","ts":"2024-08-06T00:14:59.734478Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86c29206b457f123 received MsgVoteResp from 86c29206b457f123 at term 3"}
	{"level":"info","ts":"2024-08-06T00:14:59.734488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86c29206b457f123 became leader at term 3"}
	{"level":"info","ts":"2024-08-06T00:14:59.734498Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 86c29206b457f123 elected leader 86c29206b457f123 at term 3"}
	{"level":"info","ts":"2024-08-06T00:14:59.740513Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T00:14:59.740468Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"86c29206b457f123","local-member-attributes":"{Name:pause-161508 ClientURLs:[https://192.168.39.118:2379]}","request-path":"/0/members/86c29206b457f123/attributes","cluster-id":"56e4fbef5627b38f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-06T00:14:59.741621Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T00:14:59.74188Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-06T00:14:59.741896Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-06T00:14:59.743751Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.118:2379"}
	{"level":"info","ts":"2024-08-06T00:14:59.744151Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-06T00:15:01.441715Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-06T00:15:01.441787Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"pause-161508","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.118:2380"],"advertise-client-urls":["https://192.168.39.118:2379"]}
	{"level":"warn","ts":"2024-08-06T00:15:01.441867Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.118:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-06T00:15:01.441907Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.118:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-06T00:15:01.443226Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-06T00:15:01.443325Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-06T00:15:01.468325Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"86c29206b457f123","current-leader-member-id":"86c29206b457f123"}
	{"level":"info","ts":"2024-08-06T00:15:01.476372Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.118:2380"}
	{"level":"info","ts":"2024-08-06T00:15:01.476616Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.118:2380"}
	{"level":"info","ts":"2024-08-06T00:15:01.47663Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"pause-161508","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.118:2380"],"advertise-client-urls":["https://192.168.39.118:2379"]}
	
	
	==> kernel <==
	 00:15:35 up 2 min,  0 users,  load average: 0.78, 0.32, 0.12
	Linux pause-161508 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b5f13fe4c6e99948bd3db06aa7e20e2aa8073f836fe73e27f62926299efa70db] <==
	W0806 00:15:10.715829       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:10.734882       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:10.744080       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:10.745607       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:10.756926       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:10.758407       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:10.761067       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:10.770177       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:10.861145       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:10.868174       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:10.886010       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:10.922363       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:10.957873       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:10.980173       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:11.017777       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:11.052290       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:11.079236       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:11.122637       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:11.159418       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:11.237890       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:11.283014       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:11.291202       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:11.326767       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:11.354907       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:11.356406       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [bc6df04b5cb9b90f3374c1e2cd15ec1fb3a0df999fa901662eecfe2bb3d6ee58] <==
	I0806 00:15:17.166812       1 controller.go:87] Starting OpenAPI V3 controller
	I0806 00:15:17.166890       1 naming_controller.go:291] Starting NamingConditionController
	I0806 00:15:17.166925       1 establishing_controller.go:76] Starting EstablishingController
	I0806 00:15:17.166961       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0806 00:15:17.167022       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0806 00:15:17.204063       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0806 00:15:17.204128       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0806 00:15:17.208893       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0806 00:15:17.208979       1 policy_source.go:224] refreshing policies
	I0806 00:15:17.221917       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0806 00:15:17.222035       1 shared_informer.go:320] Caches are synced for configmaps
	I0806 00:15:17.226941       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0806 00:15:17.226978       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0806 00:15:17.227236       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0806 00:15:17.234429       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0806 00:15:17.246328       1 cache.go:39] Caches are synced for autoregister controller
	I0806 00:15:17.308496       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0806 00:15:18.107653       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0806 00:15:18.751435       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0806 00:15:18.764887       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0806 00:15:18.812579       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0806 00:15:18.846488       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0806 00:15:18.853746       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0806 00:15:30.560150       1 controller.go:615] quota admission added evaluator for: endpoints
	I0806 00:15:30.572888       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [1bf2df2d254dca2dd27d3eae24da873f45a9ff1fbdfc0ea1dd1a35201bcd069a] <==
	I0806 00:14:58.507409       1 serving.go:380] Generated self-signed cert in-memory
	I0806 00:14:58.769450       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0806 00:14:58.769491       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 00:14:58.771374       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0806 00:14:58.771670       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0806 00:14:58.771878       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0806 00:14:58.772106       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-controller-manager [8549cef6ca6f2186a15e55ba9b40db7f6b2948b5ae1430b198aaf36324fe4d12] <==
	I0806 00:15:30.440283       1 shared_informer.go:320] Caches are synced for ephemeral
	I0806 00:15:30.464363       1 shared_informer.go:320] Caches are synced for expand
	I0806 00:15:30.464791       1 shared_informer.go:320] Caches are synced for TTL
	I0806 00:15:30.488334       1 shared_informer.go:320] Caches are synced for node
	I0806 00:15:30.488694       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0806 00:15:30.488904       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0806 00:15:30.488941       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0806 00:15:30.488955       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0806 00:15:30.489136       1 shared_informer.go:320] Caches are synced for persistent volume
	I0806 00:15:30.489375       1 shared_informer.go:320] Caches are synced for endpoint
	I0806 00:15:30.491103       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0806 00:15:30.491240       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0806 00:15:30.505835       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0806 00:15:30.506057       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.389µs"
	I0806 00:15:30.519644       1 shared_informer.go:320] Caches are synced for disruption
	I0806 00:15:30.528692       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0806 00:15:30.553692       1 shared_informer.go:320] Caches are synced for resource quota
	I0806 00:15:30.567171       1 shared_informer.go:320] Caches are synced for resource quota
	I0806 00:15:30.570428       1 shared_informer.go:320] Caches are synced for crt configmap
	I0806 00:15:30.618994       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0806 00:15:30.638027       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0806 00:15:30.660492       1 shared_informer.go:320] Caches are synced for attach detach
	I0806 00:15:31.097223       1 shared_informer.go:320] Caches are synced for garbage collector
	I0806 00:15:31.098610       1 shared_informer.go:320] Caches are synced for garbage collector
	I0806 00:15:31.098679       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [adcecbbd6a938c51103d7edc01cd0855e22c469f90e20bf3e4a76fbd715a4744] <==
	I0806 00:14:58.540143       1 server_linux.go:69] "Using iptables proxy"
	I0806 00:15:01.162283       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.118"]
	I0806 00:15:01.223251       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0806 00:15:01.223333       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0806 00:15:01.223404       1 server_linux.go:165] "Using iptables Proxier"
	I0806 00:15:01.226436       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0806 00:15:01.226740       1 server.go:872] "Version info" version="v1.30.3"
	I0806 00:15:01.226780       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 00:15:01.228263       1 config.go:192] "Starting service config controller"
	I0806 00:15:01.228331       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0806 00:15:01.228366       1 config.go:101] "Starting endpoint slice config controller"
	I0806 00:15:01.228389       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0806 00:15:01.229295       1 config.go:319] "Starting node config controller"
	I0806 00:15:01.229326       1 shared_informer.go:313] Waiting for caches to sync for node config
	
	
	==> kube-proxy [bb38fda641e398e7269c4fc98840654d4ef417ccc04c0dbf6c34580362b741dc] <==
	I0806 00:15:17.797479       1 server_linux.go:69] "Using iptables proxy"
	I0806 00:15:17.806358       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.118"]
	I0806 00:15:17.841357       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0806 00:15:17.841443       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0806 00:15:17.841461       1 server_linux.go:165] "Using iptables Proxier"
	I0806 00:15:17.844034       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0806 00:15:17.844295       1 server.go:872] "Version info" version="v1.30.3"
	I0806 00:15:17.844322       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 00:15:17.845359       1 config.go:192] "Starting service config controller"
	I0806 00:15:17.845432       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0806 00:15:17.845458       1 config.go:101] "Starting endpoint slice config controller"
	I0806 00:15:17.845461       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0806 00:15:17.846027       1 config.go:319] "Starting node config controller"
	I0806 00:15:17.846058       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0806 00:15:17.945622       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0806 00:15:17.945710       1 shared_informer.go:320] Caches are synced for service config
	I0806 00:15:17.946371       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7d8cf53ea71f671cd11c77d76585125000808e1e5e9dbdf057515fae3694c8c2] <==
	I0806 00:14:58.557483       1 serving.go:380] Generated self-signed cert in-memory
	W0806 00:15:01.111136       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0806 00:15:01.113702       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0806 00:15:01.113837       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0806 00:15:01.113933       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0806 00:15:01.174034       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0806 00:15:01.174179       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 00:15:01.179469       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0806 00:15:01.180394       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0806 00:15:01.180446       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0806 00:15:01.180465       1 shared_informer.go:316] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0806 00:15:01.180473       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0806 00:15:01.180592       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0806 00:15:01.180622       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0806 00:15:01.180731       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0806 00:15:01.181208       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	E0806 00:15:01.181390       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [982476c4266b39f507a2b02b008aa89568d49d4e23c11d16111623b19660630c] <==
	I0806 00:15:15.108125       1 serving.go:380] Generated self-signed cert in-memory
	W0806 00:15:17.175012       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0806 00:15:17.175372       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0806 00:15:17.175432       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0806 00:15:17.175457       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0806 00:15:17.227135       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0806 00:15:17.227194       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 00:15:17.230858       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0806 00:15:17.231051       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0806 00:15:17.231099       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0806 00:15:17.231131       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0806 00:15:17.331870       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 06 00:15:13 pause-161508 kubelet[3386]: I0806 00:15:13.606268    3386 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aa0ba9a109192f9bf83e28dceb8ed1ab-usr-share-ca-certificates\") pod \"kube-apiserver-pause-161508\" (UID: \"aa0ba9a109192f9bf83e28dceb8ed1ab\") " pod="kube-system/kube-apiserver-pause-161508"
	Aug 06 00:15:13 pause-161508 kubelet[3386]: I0806 00:15:13.606290    3386 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e853443f8265426dc355b3c076e12bba-ca-certs\") pod \"kube-controller-manager-pause-161508\" (UID: \"e853443f8265426dc355b3c076e12bba\") " pod="kube-system/kube-controller-manager-pause-161508"
	Aug 06 00:15:13 pause-161508 kubelet[3386]: I0806 00:15:13.606317    3386 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e853443f8265426dc355b3c076e12bba-k8s-certs\") pod \"kube-controller-manager-pause-161508\" (UID: \"e853443f8265426dc355b3c076e12bba\") " pod="kube-system/kube-controller-manager-pause-161508"
	Aug 06 00:15:13 pause-161508 kubelet[3386]: I0806 00:15:13.606371    3386 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e853443f8265426dc355b3c076e12bba-kubeconfig\") pod \"kube-controller-manager-pause-161508\" (UID: \"e853443f8265426dc355b3c076e12bba\") " pod="kube-system/kube-controller-manager-pause-161508"
	Aug 06 00:15:13 pause-161508 kubelet[3386]: I0806 00:15:13.700306    3386 kubelet_node_status.go:73] "Attempting to register node" node="pause-161508"
	Aug 06 00:15:13 pause-161508 kubelet[3386]: E0806 00:15:13.701242    3386 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.118:8443: connect: connection refused" node="pause-161508"
	Aug 06 00:15:13 pause-161508 kubelet[3386]: I0806 00:15:13.839147    3386 scope.go:117] "RemoveContainer" containerID="6c3e3869967dcdea9538e99cfba9fa7cbeab8604b70330171ff36214ad65dc4f"
	Aug 06 00:15:13 pause-161508 kubelet[3386]: I0806 00:15:13.840383    3386 scope.go:117] "RemoveContainer" containerID="1bf2df2d254dca2dd27d3eae24da873f45a9ff1fbdfc0ea1dd1a35201bcd069a"
	Aug 06 00:15:13 pause-161508 kubelet[3386]: I0806 00:15:13.841284    3386 scope.go:117] "RemoveContainer" containerID="b5f13fe4c6e99948bd3db06aa7e20e2aa8073f836fe73e27f62926299efa70db"
	Aug 06 00:15:13 pause-161508 kubelet[3386]: I0806 00:15:13.843805    3386 scope.go:117] "RemoveContainer" containerID="7d8cf53ea71f671cd11c77d76585125000808e1e5e9dbdf057515fae3694c8c2"
	Aug 06 00:15:14 pause-161508 kubelet[3386]: E0806 00:15:14.005777    3386 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-161508?timeout=10s\": dial tcp 192.168.39.118:8443: connect: connection refused" interval="800ms"
	Aug 06 00:15:14 pause-161508 kubelet[3386]: I0806 00:15:14.106201    3386 kubelet_node_status.go:73] "Attempting to register node" node="pause-161508"
	Aug 06 00:15:14 pause-161508 kubelet[3386]: E0806 00:15:14.107826    3386 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.118:8443: connect: connection refused" node="pause-161508"
	Aug 06 00:15:14 pause-161508 kubelet[3386]: I0806 00:15:14.909385    3386 kubelet_node_status.go:73] "Attempting to register node" node="pause-161508"
	Aug 06 00:15:17 pause-161508 kubelet[3386]: I0806 00:15:17.273053    3386 kubelet_node_status.go:112] "Node was previously registered" node="pause-161508"
	Aug 06 00:15:17 pause-161508 kubelet[3386]: I0806 00:15:17.273266    3386 kubelet_node_status.go:76] "Successfully registered node" node="pause-161508"
	Aug 06 00:15:17 pause-161508 kubelet[3386]: I0806 00:15:17.275268    3386 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 06 00:15:17 pause-161508 kubelet[3386]: I0806 00:15:17.276647    3386 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 06 00:15:17 pause-161508 kubelet[3386]: I0806 00:15:17.368804    3386 apiserver.go:52] "Watching apiserver"
	Aug 06 00:15:17 pause-161508 kubelet[3386]: I0806 00:15:17.371503    3386 topology_manager.go:215] "Topology Admit Handler" podUID="111220a5-a088-4652-a1a3-284f2d1b111b" podNamespace="kube-system" podName="coredns-7db6d8ff4d-9wwqk"
	Aug 06 00:15:17 pause-161508 kubelet[3386]: I0806 00:15:17.372608    3386 topology_manager.go:215] "Topology Admit Handler" podUID="d90e043a-0525-4e59-9712-70116590d766" podNamespace="kube-system" podName="kube-proxy-55wbx"
	Aug 06 00:15:17 pause-161508 kubelet[3386]: I0806 00:15:17.396598    3386 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Aug 06 00:15:17 pause-161508 kubelet[3386]: I0806 00:15:17.456587    3386 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d90e043a-0525-4e59-9712-70116590d766-xtables-lock\") pod \"kube-proxy-55wbx\" (UID: \"d90e043a-0525-4e59-9712-70116590d766\") " pod="kube-system/kube-proxy-55wbx"
	Aug 06 00:15:17 pause-161508 kubelet[3386]: I0806 00:15:17.457008    3386 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d90e043a-0525-4e59-9712-70116590d766-lib-modules\") pod \"kube-proxy-55wbx\" (UID: \"d90e043a-0525-4e59-9712-70116590d766\") " pod="kube-system/kube-proxy-55wbx"
	Aug 06 00:15:17 pause-161508 kubelet[3386]: I0806 00:15:17.673804    3386 scope.go:117] "RemoveContainer" containerID="adcecbbd6a938c51103d7edc01cd0855e22c469f90e20bf3e4a76fbd715a4744"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0806 00:15:34.433072   62684 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19373-9606/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-161508 -n pause-161508
helpers_test.go:261: (dbg) Run:  kubectl --context pause-161508 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-161508 -n pause-161508
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-161508 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-161508 logs -n 25: (1.440770635s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-env-571298           | force-systemd-env-571298  | jenkins | v1.33.1 | 06 Aug 24 00:10 UTC | 06 Aug 24 00:11 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-849515                | NoKubernetes-849515       | jenkins | v1.33.1 | 06 Aug 24 00:11 UTC | 06 Aug 24 00:11 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-571298           | force-systemd-env-571298  | jenkins | v1.33.1 | 06 Aug 24 00:11 UTC | 06 Aug 24 00:11 UTC |
	| delete  | -p offline-crio-820703                | offline-crio-820703       | jenkins | v1.33.1 | 06 Aug 24 00:11 UTC | 06 Aug 24 00:11 UTC |
	| start   | -p force-systemd-flag-936727          | force-systemd-flag-936727 | jenkins | v1.33.1 | 06 Aug 24 00:11 UTC | 06 Aug 24 00:12 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p cert-expiration-272169             | cert-expiration-272169    | jenkins | v1.33.1 | 06 Aug 24 00:11 UTC | 06 Aug 24 00:12 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-849515                | NoKubernetes-849515       | jenkins | v1.33.1 | 06 Aug 24 00:11 UTC | 06 Aug 24 00:11 UTC |
	| start   | -p NoKubernetes-849515                | NoKubernetes-849515       | jenkins | v1.33.1 | 06 Aug 24 00:11 UTC | 06 Aug 24 00:13 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p running-upgrade-863913             | running-upgrade-863913    | jenkins | v1.33.1 | 06 Aug 24 00:12 UTC | 06 Aug 24 00:14 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-936727 ssh cat     | force-systemd-flag-936727 | jenkins | v1.33.1 | 06 Aug 24 00:12 UTC | 06 Aug 24 00:12 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-936727          | force-systemd-flag-936727 | jenkins | v1.33.1 | 06 Aug 24 00:12 UTC | 06 Aug 24 00:12 UTC |
	| start   | -p pause-161508 --memory=2048         | pause-161508              | jenkins | v1.33.1 | 06 Aug 24 00:12 UTC | 06 Aug 24 00:14 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-849515 sudo           | NoKubernetes-849515       | jenkins | v1.33.1 | 06 Aug 24 00:13 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-849515                | NoKubernetes-849515       | jenkins | v1.33.1 | 06 Aug 24 00:13 UTC | 06 Aug 24 00:13 UTC |
	| start   | -p NoKubernetes-849515                | NoKubernetes-849515       | jenkins | v1.33.1 | 06 Aug 24 00:13 UTC | 06 Aug 24 00:13 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-849515 sudo           | NoKubernetes-849515       | jenkins | v1.33.1 | 06 Aug 24 00:13 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-849515                | NoKubernetes-849515       | jenkins | v1.33.1 | 06 Aug 24 00:13 UTC | 06 Aug 24 00:13 UTC |
	| start   | -p cert-options-323157                | cert-options-323157       | jenkins | v1.33.1 | 06 Aug 24 00:13 UTC | 06 Aug 24 00:14 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-863913             | running-upgrade-863913    | jenkins | v1.33.1 | 06 Aug 24 00:14 UTC | 06 Aug 24 00:14 UTC |
	| start   | -p kubernetes-upgrade-907863          | kubernetes-upgrade-907863 | jenkins | v1.33.1 | 06 Aug 24 00:14 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-161508                       | pause-161508              | jenkins | v1.33.1 | 06 Aug 24 00:14 UTC | 06 Aug 24 00:15 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-323157 ssh               | cert-options-323157       | jenkins | v1.33.1 | 06 Aug 24 00:14 UTC | 06 Aug 24 00:14 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-323157 -- sudo        | cert-options-323157       | jenkins | v1.33.1 | 06 Aug 24 00:14 UTC | 06 Aug 24 00:14 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-323157                | cert-options-323157       | jenkins | v1.33.1 | 06 Aug 24 00:14 UTC | 06 Aug 24 00:14 UTC |
	| start   | -p stopped-upgrade-936666             | minikube                  | jenkins | v1.26.0 | 06 Aug 24 00:14 UTC |                     |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 00:14:46
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.18.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 00:14:46.662867   62278 out.go:296] Setting OutFile to fd 1 ...
	I0806 00:14:46.663143   62278 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0806 00:14:46.663148   62278 out.go:309] Setting ErrFile to fd 2...
	I0806 00:14:46.663151   62278 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0806 00:14:46.663812   62278 root.go:329] Updating PATH: /home/jenkins/minikube-integration/19373-9606/.minikube/bin
	I0806 00:14:46.664069   62278 out.go:303] Setting JSON to false
	I0806 00:14:46.664998   62278 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7033,"bootTime":1722896254,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0806 00:14:46.665064   62278 start.go:125] virtualization: kvm guest
	I0806 00:14:46.667597   62278 out.go:177] * [stopped-upgrade-936666] minikube v1.26.0 on Ubuntu 20.04 (kvm/amd64)
	I0806 00:14:46.669191   62278 out.go:177]   - MINIKUBE_LOCATION=19373
	I0806 00:14:46.669206   62278 notify.go:193] Checking for updates...
	I0806 00:14:46.670831   62278 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 00:14:46.672530   62278 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19373-9606/.minikube
	I0806 00:14:46.674266   62278 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0806 00:14:46.675792   62278 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 00:14:46.677080   62278 out.go:177]   - KUBECONFIG=/tmp/legacy_kubeconfig3928839946
	I0806 00:14:46.678954   62278 config.go:178] Loaded profile config "cert-expiration-272169": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 00:14:46.679117   62278 config.go:178] Loaded profile config "kubernetes-upgrade-907863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0806 00:14:46.679337   62278 config.go:178] Loaded profile config "pause-161508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 00:14:46.679434   62278 driver.go:360] Setting default libvirt URI to qemu:///system
	I0806 00:14:46.721814   62278 out.go:177] * Using the kvm2 driver based on user configuration
	I0806 00:14:46.723356   62278 start.go:284] selected driver: kvm2
	I0806 00:14:46.723371   62278 start.go:805] validating driver "kvm2" against <nil>
	I0806 00:14:46.723399   62278 start.go:816] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 00:14:46.724368   62278 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:14:46.724580   62278 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19373-9606/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0806 00:14:46.741139   62278 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0806 00:14:46.741248   62278 start_flags.go:296] no existing cluster config was found, will generate one from the flags 
	I0806 00:14:46.741476   62278 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true]
	I0806 00:14:46.741498   62278 cni.go:95] Creating CNI manager for ""
	I0806 00:14:46.741509   62278 cni.go:165] "kvm2" driver + crio runtime found, recommending bridge
	I0806 00:14:46.741516   62278 start_flags.go:305] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0806 00:14:46.741524   62278 start_flags.go:310] config:
	{Name:stopped-upgrade-936666 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-936666 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
	I0806 00:14:46.741640   62278 iso.go:128] acquiring lock: {Name:mk3d6c03f606a5ab492378ade22ea2c351c6325a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:14:46.744089   62278 out.go:177] * Starting control plane node stopped-upgrade-936666 in cluster stopped-upgrade-936666
	I0806 00:14:46.369648   62044 machine.go:94] provisionDockerMachine start ...
	I0806 00:14:46.369671   62044 main.go:141] libmachine: (pause-161508) Calling .DriverName
	I0806 00:14:46.369843   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHHostname
	I0806 00:14:46.373341   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:46.373936   62044 main.go:141] libmachine: (pause-161508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:8e:29", ip: ""} in network mk-pause-161508: {Iface:virbr3 ExpiryTime:2024-08-06 01:13:19 +0000 UTC Type:0 Mac:52:54:00:33:8e:29 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:pause-161508 Clientid:01:52:54:00:33:8e:29}
	I0806 00:14:46.373958   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined IP address 192.168.39.118 and MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:46.374181   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHPort
	I0806 00:14:46.374359   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHKeyPath
	I0806 00:14:46.374515   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHKeyPath
	I0806 00:14:46.374633   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHUsername
	I0806 00:14:46.374836   62044 main.go:141] libmachine: Using SSH client type: native
	I0806 00:14:46.375099   62044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0806 00:14:46.375113   62044 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 00:14:46.484502   62044 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-161508
	
	I0806 00:14:46.484534   62044 main.go:141] libmachine: (pause-161508) Calling .GetMachineName
	I0806 00:14:46.484794   62044 buildroot.go:166] provisioning hostname "pause-161508"
	I0806 00:14:46.484826   62044 main.go:141] libmachine: (pause-161508) Calling .GetMachineName
	I0806 00:14:46.485005   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHHostname
	I0806 00:14:46.488282   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:46.488626   62044 main.go:141] libmachine: (pause-161508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:8e:29", ip: ""} in network mk-pause-161508: {Iface:virbr3 ExpiryTime:2024-08-06 01:13:19 +0000 UTC Type:0 Mac:52:54:00:33:8e:29 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:pause-161508 Clientid:01:52:54:00:33:8e:29}
	I0806 00:14:46.488661   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined IP address 192.168.39.118 and MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:46.488775   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHPort
	I0806 00:14:46.488956   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHKeyPath
	I0806 00:14:46.489138   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHKeyPath
	I0806 00:14:46.489280   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHUsername
	I0806 00:14:46.489470   62044 main.go:141] libmachine: Using SSH client type: native
	I0806 00:14:46.489668   62044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0806 00:14:46.489687   62044 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-161508 && echo "pause-161508" | sudo tee /etc/hostname
	I0806 00:14:46.627230   62044 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-161508
	
	I0806 00:14:46.627262   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHHostname
	I0806 00:14:46.630624   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:46.631014   62044 main.go:141] libmachine: (pause-161508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:8e:29", ip: ""} in network mk-pause-161508: {Iface:virbr3 ExpiryTime:2024-08-06 01:13:19 +0000 UTC Type:0 Mac:52:54:00:33:8e:29 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:pause-161508 Clientid:01:52:54:00:33:8e:29}
	I0806 00:14:46.631044   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined IP address 192.168.39.118 and MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:46.631467   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHPort
	I0806 00:14:46.631646   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHKeyPath
	I0806 00:14:46.631821   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHKeyPath
	I0806 00:14:46.632028   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHUsername
	I0806 00:14:46.632193   62044 main.go:141] libmachine: Using SSH client type: native
	I0806 00:14:46.632425   62044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0806 00:14:46.632449   62044 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-161508' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-161508/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-161508' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 00:14:46.752560   62044 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:14:46.752602   62044 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19373-9606/.minikube CaCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19373-9606/.minikube}
	I0806 00:14:46.752646   62044 buildroot.go:174] setting up certificates
	I0806 00:14:46.752658   62044 provision.go:84] configureAuth start
	I0806 00:14:46.752672   62044 main.go:141] libmachine: (pause-161508) Calling .GetMachineName
	I0806 00:14:46.752976   62044 main.go:141] libmachine: (pause-161508) Calling .GetIP
	I0806 00:14:46.755947   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:46.756352   62044 main.go:141] libmachine: (pause-161508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:8e:29", ip: ""} in network mk-pause-161508: {Iface:virbr3 ExpiryTime:2024-08-06 01:13:19 +0000 UTC Type:0 Mac:52:54:00:33:8e:29 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:pause-161508 Clientid:01:52:54:00:33:8e:29}
	I0806 00:14:46.756377   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined IP address 192.168.39.118 and MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:46.756591   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHHostname
	I0806 00:14:46.759702   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:46.760112   62044 main.go:141] libmachine: (pause-161508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:8e:29", ip: ""} in network mk-pause-161508: {Iface:virbr3 ExpiryTime:2024-08-06 01:13:19 +0000 UTC Type:0 Mac:52:54:00:33:8e:29 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:pause-161508 Clientid:01:52:54:00:33:8e:29}
	I0806 00:14:46.760141   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined IP address 192.168.39.118 and MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:46.760359   62044 provision.go:143] copyHostCerts
	I0806 00:14:46.760426   62044 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem, removing ...
	I0806 00:14:46.760437   62044 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem
	I0806 00:14:46.760495   62044 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem (1082 bytes)
	I0806 00:14:46.760592   62044 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem, removing ...
	I0806 00:14:46.760601   62044 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem
	I0806 00:14:46.760625   62044 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem (1123 bytes)
	I0806 00:14:46.760711   62044 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem, removing ...
	I0806 00:14:46.760720   62044 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem
	I0806 00:14:46.760739   62044 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem (1679 bytes)
	I0806 00:14:46.760810   62044 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem org=jenkins.pause-161508 san=[127.0.0.1 192.168.39.118 localhost minikube pause-161508]
	I0806 00:14:44.852616   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:44.853078   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Found IP for machine: 192.168.72.112
	I0806 00:14:44.853113   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has current primary IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:44.853123   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Reserving static IP address...
	I0806 00:14:44.853598   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-907863", mac: "52:54:00:f6:6f:99", ip: "192.168.72.112"} in network mk-kubernetes-upgrade-907863
	I0806 00:14:44.937213   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | Getting to WaitForSSH function...
	I0806 00:14:44.937245   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Reserved static IP address: 192.168.72.112
	I0806 00:14:44.937261   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Waiting for SSH to be available...
	I0806 00:14:44.940137   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:44.940632   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:44.940673   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:44.940801   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | Using SSH client type: external
	I0806 00:14:44.940825   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | Using SSH private key: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/kubernetes-upgrade-907863/id_rsa (-rw-------)
	I0806 00:14:44.940862   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.112 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19373-9606/.minikube/machines/kubernetes-upgrade-907863/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0806 00:14:44.940880   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | About to run SSH command:
	I0806 00:14:44.940892   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | exit 0
	I0806 00:14:45.063499   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | SSH cmd err, output: <nil>: 
	I0806 00:14:45.063807   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) KVM machine creation complete!
	I0806 00:14:45.064161   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetConfigRaw
	I0806 00:14:45.064785   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .DriverName
	I0806 00:14:45.064974   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .DriverName
	I0806 00:14:45.065103   61720 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0806 00:14:45.065125   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetState
	I0806 00:14:45.066700   61720 main.go:141] libmachine: Detecting operating system of created instance...
	I0806 00:14:45.066716   61720 main.go:141] libmachine: Waiting for SSH to be available...
	I0806 00:14:45.066724   61720 main.go:141] libmachine: Getting to WaitForSSH function...
	I0806 00:14:45.066732   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHHostname
	I0806 00:14:45.069985   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.070424   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:45.070453   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.070630   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHPort
	I0806 00:14:45.070807   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:45.071003   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:45.071149   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHUsername
	I0806 00:14:45.071287   61720 main.go:141] libmachine: Using SSH client type: native
	I0806 00:14:45.071475   61720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0806 00:14:45.071492   61720 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0806 00:14:45.178702   61720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
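
The "exit 0" probe above is how libmachine decides that sshd inside the freshly created guest is answering before provisioning continues. Below is a minimal sketch of that reachability check, assuming the golang.org/x/crypto/ssh package and reusing the address, user, and key path printed in the surrounding log; waitForSSH is an illustrative helper name, not minikube's actual function.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// waitForSSH dials the guest and runs "exit 0"; a nil error means sshd is up.
func waitForSSH(addr, user, keyPath string) error {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // same effect as StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	return sess.Run("exit 0")
}

func main() {
	err := waitForSSH("192.168.72.112:22", "docker",
		"/home/jenkins/minikube-integration/19373-9606/.minikube/machines/kubernetes-upgrade-907863/id_rsa")
	fmt.Println("ssh reachable:", err == nil)
}
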
	I0806 00:14:45.178751   61720 main.go:141] libmachine: Detecting the provisioner...
	I0806 00:14:45.178767   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHHostname
	I0806 00:14:45.182067   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.182470   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:45.182517   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.182630   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHPort
	I0806 00:14:45.182863   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:45.183077   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:45.183250   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHUsername
	I0806 00:14:45.183416   61720 main.go:141] libmachine: Using SSH client type: native
	I0806 00:14:45.183625   61720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0806 00:14:45.183636   61720 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0806 00:14:45.283887   61720 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0806 00:14:45.283956   61720 main.go:141] libmachine: found compatible host: buildroot
	I0806 00:14:45.283966   61720 main.go:141] libmachine: Provisioning with buildroot...
	I0806 00:14:45.283978   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetMachineName
	I0806 00:14:45.284240   61720 buildroot.go:166] provisioning hostname "kubernetes-upgrade-907863"
	I0806 00:14:45.284270   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetMachineName
	I0806 00:14:45.284472   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHHostname
	I0806 00:14:45.287574   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.287912   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:45.287955   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.288147   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHPort
	I0806 00:14:45.288338   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:45.288509   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:45.288713   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHUsername
	I0806 00:14:45.288922   61720 main.go:141] libmachine: Using SSH client type: native
	I0806 00:14:45.289156   61720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0806 00:14:45.289167   61720 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-907863 && echo "kubernetes-upgrade-907863" | sudo tee /etc/hostname
	I0806 00:14:45.413866   61720 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-907863
	
	I0806 00:14:45.413901   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHHostname
	I0806 00:14:45.417554   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.418009   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:45.418043   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.418153   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHPort
	I0806 00:14:45.418331   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:45.418573   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:45.418717   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHUsername
	I0806 00:14:45.418894   61720 main.go:141] libmachine: Using SSH client type: native
	I0806 00:14:45.419083   61720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0806 00:14:45.419103   61720 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-907863' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-907863/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-907863' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 00:14:45.530368   61720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
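
The shell snippet above is an idempotent hostname mapping: it only touches /etc/hosts when no line already ends in the machine name, rewriting an existing 127.0.1.1 entry if one is present and appending a new one otherwise. A rough Go equivalent of the same logic is sketched below; ensureHostMapping is an illustrative name, and minikube itself performs this over SSH with the sed/tee pipeline shown in the log rather than with this code.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// ensureHostMapping mirrors the shell above: if /etc/hosts has no entry ending in
// hostname, either rewrite an existing 127.0.1.1 line or append a new mapping.
func ensureHostMapping(hostsPath, hostname string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(hostname)+`$`).Match(data) {
		return nil // already mapped
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.Match(data) {
		data = loopback.ReplaceAll(data, []byte("127.0.1.1 "+hostname))
	} else {
		data = append(data, []byte("127.0.1.1 "+hostname+"\n")...)
	}
	return os.WriteFile(hostsPath, data, 0644)
}

func main() {
	if err := ensureHostMapping("/etc/hosts", "kubernetes-upgrade-907863"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
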
	I0806 00:14:45.530403   61720 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19373-9606/.minikube CaCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19373-9606/.minikube}
	I0806 00:14:45.530459   61720 buildroot.go:174] setting up certificates
	I0806 00:14:45.530478   61720 provision.go:84] configureAuth start
	I0806 00:14:45.530497   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetMachineName
	I0806 00:14:45.530793   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetIP
	I0806 00:14:45.533849   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.534237   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:45.534262   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.534404   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHHostname
	I0806 00:14:45.536544   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.536851   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:45.536890   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.537001   61720 provision.go:143] copyHostCerts
	I0806 00:14:45.537066   61720 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem, removing ...
	I0806 00:14:45.537083   61720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem
	I0806 00:14:45.537142   61720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/key.pem (1679 bytes)
	I0806 00:14:45.537260   61720 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem, removing ...
	I0806 00:14:45.537272   61720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem
	I0806 00:14:45.537309   61720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/ca.pem (1082 bytes)
	I0806 00:14:45.537395   61720 exec_runner.go:144] found /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem, removing ...
	I0806 00:14:45.537405   61720 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem
	I0806 00:14:45.537432   61720 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19373-9606/.minikube/cert.pem (1123 bytes)
	I0806 00:14:45.537496   61720 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-907863 san=[127.0.0.1 192.168.72.112 kubernetes-upgrade-907863 localhost minikube]
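
provision.go:117 above generates the server certificate for the guest's container runtime endpoint, with SANs covering the loopback address, the guest IP, and the machine's host names. The sketch below shows how such a certificate can be built with crypto/x509; for brevity it self-signs, whereas minikube signs with the ca.pem/ca-key.pem pair listed in the log, and the SAN and organization values here are simply copied from the log entry above.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	// Server-auth certificate template with the SANs from the log line above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-907863"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"kubernetes-upgrade-907863", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.112")},
	}
	// Self-signed here for brevity; minikube uses the CA key as the signing parent instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
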
	I0806 00:14:45.648251   61720 provision.go:177] copyRemoteCerts
	I0806 00:14:45.648303   61720 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 00:14:45.648333   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHHostname
	I0806 00:14:45.650992   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.651510   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:45.651534   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.651720   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHPort
	I0806 00:14:45.651912   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:45.652105   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHUsername
	I0806 00:14:45.652257   61720 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/kubernetes-upgrade-907863/id_rsa Username:docker}
	I0806 00:14:45.733623   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0806 00:14:45.759216   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0806 00:14:45.785907   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 00:14:45.812282   61720 provision.go:87] duration metric: took 281.788709ms to configureAuth
	I0806 00:14:45.812310   61720 buildroot.go:189] setting minikube options for container-runtime
	I0806 00:14:45.812466   61720 config.go:182] Loaded profile config "kubernetes-upgrade-907863": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0806 00:14:45.812527   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHHostname
	I0806 00:14:45.815951   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.816375   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:45.816401   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:45.816598   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHPort
	I0806 00:14:45.816826   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:45.816995   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:45.817171   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHUsername
	I0806 00:14:45.817360   61720 main.go:141] libmachine: Using SSH client type: native
	I0806 00:14:45.817605   61720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0806 00:14:45.817633   61720 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 00:14:46.096742   61720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 00:14:46.096779   61720 main.go:141] libmachine: Checking connection to Docker...
	I0806 00:14:46.096793   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetURL
	I0806 00:14:46.098348   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | Using libvirt version 6000000
	I0806 00:14:46.100964   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:46.101255   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:46.101277   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:46.101439   61720 main.go:141] libmachine: Docker is up and running!
	I0806 00:14:46.101449   61720 main.go:141] libmachine: Reticulating splines...
	I0806 00:14:46.101457   61720 client.go:171] duration metric: took 23.613079714s to LocalClient.Create
	I0806 00:14:46.101483   61720 start.go:167] duration metric: took 23.613147049s to libmachine.API.Create "kubernetes-upgrade-907863"
	I0806 00:14:46.101494   61720 start.go:293] postStartSetup for "kubernetes-upgrade-907863" (driver="kvm2")
	I0806 00:14:46.101508   61720 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 00:14:46.101531   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .DriverName
	I0806 00:14:46.101781   61720 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 00:14:46.101829   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHHostname
	I0806 00:14:46.104347   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:46.104786   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:46.104813   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:46.105081   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHPort
	I0806 00:14:46.105257   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:46.105445   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHUsername
	I0806 00:14:46.105604   61720 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/kubernetes-upgrade-907863/id_rsa Username:docker}
	I0806 00:14:46.188914   61720 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 00:14:46.193808   61720 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 00:14:46.193837   61720 filesync.go:126] Scanning /home/jenkins/minikube-integration/19373-9606/.minikube/addons for local assets ...
	I0806 00:14:46.193939   61720 filesync.go:126] Scanning /home/jenkins/minikube-integration/19373-9606/.minikube/files for local assets ...
	I0806 00:14:46.194050   61720 filesync.go:149] local asset: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem -> 167922.pem in /etc/ssl/certs
	I0806 00:14:46.194181   61720 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 00:14:46.208786   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem --> /etc/ssl/certs/167922.pem (1708 bytes)
	I0806 00:14:46.234274   61720 start.go:296] duration metric: took 132.765664ms for postStartSetup
	I0806 00:14:46.234326   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetConfigRaw
	I0806 00:14:46.234938   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetIP
	I0806 00:14:46.237911   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:46.238167   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:46.238204   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:46.238390   61720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/config.json ...
	I0806 00:14:46.238584   61720 start.go:128] duration metric: took 23.774163741s to createHost
	I0806 00:14:46.238611   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHHostname
	I0806 00:14:46.240741   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:46.241026   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:46.241051   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:46.241251   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHPort
	I0806 00:14:46.241413   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:46.241580   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:46.241731   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHUsername
	I0806 00:14:46.241879   61720 main.go:141] libmachine: Using SSH client type: native
	I0806 00:14:46.242047   61720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0806 00:14:46.242056   61720 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0806 00:14:46.343871   61720 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722903286.327466870
	
	I0806 00:14:46.343890   61720 fix.go:216] guest clock: 1722903286.327466870
	I0806 00:14:46.343897   61720 fix.go:229] Guest: 2024-08-06 00:14:46.32746687 +0000 UTC Remote: 2024-08-06 00:14:46.238596191 +0000 UTC m=+34.462085673 (delta=88.870679ms)
	I0806 00:14:46.343917   61720 fix.go:200] guest clock delta is within tolerance: 88.870679ms
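
fix.go above reads the guest clock by running date +%s.%N, parses the seconds.nanoseconds value, and compares it with the host-side timestamp taken just before the command ran; a small delta (about 89ms here) is accepted. The tiny sketch below reproduces that comparison using the two timestamps from the log; the 2s tolerance is an assumed illustration, not a value taken from minikube.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest reading parsed from the date +%s.%N output above; remote is the host-side reading.
	guest := time.Unix(1722903286, 327466870)
	remote := time.Date(2024, time.August, 6, 0, 14, 46, 238596191, time.UTC)
	delta := guest.Sub(remote)

	const tolerance = 2 * time.Second // assumed threshold for illustration only
	within := delta < tolerance && delta > -tolerance
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, within)
}
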
	I0806 00:14:46.343921   61720 start.go:83] releasing machines lock for "kubernetes-upgrade-907863", held for 23.87968491s
	I0806 00:14:46.343942   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .DriverName
	I0806 00:14:46.344223   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetIP
	I0806 00:14:46.346864   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:46.347365   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:46.347401   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:46.347607   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .DriverName
	I0806 00:14:46.348173   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .DriverName
	I0806 00:14:46.348392   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .DriverName
	I0806 00:14:46.348501   61720 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 00:14:46.348545   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHHostname
	I0806 00:14:46.348623   61720 ssh_runner.go:195] Run: cat /version.json
	I0806 00:14:46.348646   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHHostname
	I0806 00:14:46.351386   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:46.351533   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:46.351746   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:46.351771   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:46.351895   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHPort
	I0806 00:14:46.352002   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:46.352025   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:46.352047   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:46.352224   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHPort
	I0806 00:14:46.352225   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHUsername
	I0806 00:14:46.352398   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHKeyPath
	I0806 00:14:46.352410   61720 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/kubernetes-upgrade-907863/id_rsa Username:docker}
	I0806 00:14:46.352531   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetSSHUsername
	I0806 00:14:46.352676   61720 sshutil.go:53] new ssh client: &{IP:192.168.72.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/kubernetes-upgrade-907863/id_rsa Username:docker}
	I0806 00:14:46.432018   61720 ssh_runner.go:195] Run: systemctl --version
	I0806 00:14:46.454693   61720 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 00:14:46.628607   61720 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 00:14:46.638675   61720 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 00:14:46.638749   61720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 00:14:46.659007   61720 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 00:14:46.659034   61720 start.go:495] detecting cgroup driver to use...
	I0806 00:14:46.659142   61720 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 00:14:46.680151   61720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:14:46.698336   61720 docker.go:217] disabling cri-docker service (if available) ...
	I0806 00:14:46.698504   61720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 00:14:46.715093   61720 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 00:14:46.730157   61720 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 00:14:46.849640   61720 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 00:14:47.007622   61720 docker.go:233] disabling docker service ...
	I0806 00:14:47.007694   61720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 00:14:47.022913   61720 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 00:14:47.037788   61720 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 00:14:47.172160   61720 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 00:14:47.297771   61720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 00:14:47.315774   61720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:14:47.335893   61720 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0806 00:14:47.335977   61720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 00:14:47.350348   61720 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 00:14:47.350417   61720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 00:14:47.362187   61720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 00:14:47.375760   61720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 00:14:47.388776   61720 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
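
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image and force the cgroupfs cgroup manager (plus a conmon_cgroup entry, omitted below). A partial Go sketch of the same in-place rewrite using regular expressions follows; rewriteCrioConf is an illustrative helper, not minikube's code, which shells out to sed as shown.

package main

import (
	"log"
	"os"
	"regexp"
)

// rewriteCrioConf mirrors the sed edits above: pin the pause image and switch
// the cgroup manager in the CRI-O drop-in config.
func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "`+pauseImage+`"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "`+cgroupManager+`"`))
	return os.WriteFile(path, data, 0644)
}

func main() {
	if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.2", "cgroupfs"); err != nil {
		log.Fatal(err)
	}
}
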
	I0806 00:14:47.401761   61720 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 00:14:47.412720   61720 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0806 00:14:47.412787   61720 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0806 00:14:47.428644   61720 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
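
The sequence above is a fallback path: the bridge-nf-call-iptables sysctl is missing, so the br_netfilter module is loaded and IPv4 forwarding is switched on. Below is a small Go sketch of that check-then-load logic, meant to run as root on the guest; ensureBridgeNetfilter is an illustrative name.

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the fallback above: if the bridge-nf-call-iptables
// sysctl file is absent, load br_netfilter, then make sure IPv4 forwarding is on.
func ensureBridgeNetfilter() error {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); os.IsNotExist(err) {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v\n%s", err, out)
		}
	}
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644)
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		log.Fatal(err)
	}
}
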
	I0806 00:14:47.440189   61720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:14:47.553614   61720 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0806 00:14:47.698481   61720 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 00:14:47.698569   61720 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 00:14:47.703748   61720 start.go:563] Will wait 60s for crictl version
	I0806 00:14:47.703812   61720 ssh_runner.go:195] Run: which crictl
	I0806 00:14:47.708040   61720 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 00:14:47.749798   61720 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 00:14:47.749884   61720 ssh_runner.go:195] Run: crio --version
	I0806 00:14:47.779166   61720 ssh_runner.go:195] Run: crio --version
	I0806 00:14:47.812309   61720 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0806 00:14:46.745487   62278 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0806 00:14:46.745523   62278 preload.go:148] Found local preload: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4
	I0806 00:14:46.745529   62278 cache.go:57] Caching tarball of preloaded images
	I0806 00:14:46.745656   62278 preload.go:174] Found /home/jenkins/minikube-integration/19373-9606/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0806 00:14:46.745670   62278 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.1 on crio
	I0806 00:14:46.745775   62278 profile.go:148] Saving config to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/stopped-upgrade-936666/config.json ...
	I0806 00:14:46.745792   62278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/stopped-upgrade-936666/config.json: {Name:mk6c297a1f267f679d468f1e18f9a6917b08cdfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:14:46.745948   62278 cache.go:208] Successfully downloaded all kic artifacts
	I0806 00:14:46.745994   62278 start.go:352] acquiring machines lock for stopped-upgrade-936666: {Name:mkd2ba511c39504598222edbf83078b718329186 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:14:46.982836   62044 provision.go:177] copyRemoteCerts
	I0806 00:14:46.982898   62044 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 00:14:46.982922   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHHostname
	I0806 00:14:46.985958   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:46.986362   62044 main.go:141] libmachine: (pause-161508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:8e:29", ip: ""} in network mk-pause-161508: {Iface:virbr3 ExpiryTime:2024-08-06 01:13:19 +0000 UTC Type:0 Mac:52:54:00:33:8e:29 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:pause-161508 Clientid:01:52:54:00:33:8e:29}
	I0806 00:14:46.986394   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined IP address 192.168.39.118 and MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:46.986557   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHPort
	I0806 00:14:46.986805   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHKeyPath
	I0806 00:14:46.986991   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHUsername
	I0806 00:14:46.987143   62044 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/pause-161508/id_rsa Username:docker}
	I0806 00:14:47.070447   62044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0806 00:14:47.109142   62044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0806 00:14:47.137301   62044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 00:14:47.164786   62044 provision.go:87] duration metric: took 412.109539ms to configureAuth
	I0806 00:14:47.164817   62044 buildroot.go:189] setting minikube options for container-runtime
	I0806 00:14:47.165068   62044 config.go:182] Loaded profile config "pause-161508": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0806 00:14:47.165146   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHHostname
	I0806 00:14:47.168216   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:47.168569   62044 main.go:141] libmachine: (pause-161508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:8e:29", ip: ""} in network mk-pause-161508: {Iface:virbr3 ExpiryTime:2024-08-06 01:13:19 +0000 UTC Type:0 Mac:52:54:00:33:8e:29 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:pause-161508 Clientid:01:52:54:00:33:8e:29}
	I0806 00:14:47.168631   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined IP address 192.168.39.118 and MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:47.168795   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHPort
	I0806 00:14:47.169007   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHKeyPath
	I0806 00:14:47.169210   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHKeyPath
	I0806 00:14:47.169368   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHUsername
	I0806 00:14:47.169555   62044 main.go:141] libmachine: Using SSH client type: native
	I0806 00:14:47.169746   62044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0806 00:14:47.169767   62044 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0806 00:14:47.813708   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) Calling .GetIP
	I0806 00:14:47.816108   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:47.816440   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:6f:99", ip: ""} in network mk-kubernetes-upgrade-907863: {Iface:virbr4 ExpiryTime:2024-08-06 01:14:38 +0000 UTC Type:0 Mac:52:54:00:f6:6f:99 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:kubernetes-upgrade-907863 Clientid:01:52:54:00:f6:6f:99}
	I0806 00:14:47.816466   61720 main.go:141] libmachine: (kubernetes-upgrade-907863) DBG | domain kubernetes-upgrade-907863 has defined IP address 192.168.72.112 and MAC address 52:54:00:f6:6f:99 in network mk-kubernetes-upgrade-907863
	I0806 00:14:47.816644   61720 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0806 00:14:47.821182   61720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 00:14:47.834308   61720 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-907863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.20.0 ClusterName:kubernetes-upgrade-907863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 00:14:47.834420   61720 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0806 00:14:47.834474   61720 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 00:14:47.868197   61720 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0806 00:14:47.868274   61720 ssh_runner.go:195] Run: which lz4
	I0806 00:14:47.872506   61720 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0806 00:14:47.877108   61720 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 00:14:47.877144   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0806 00:14:49.556736   61720 crio.go:462] duration metric: took 1.684254918s to copy over tarball
	I0806 00:14:49.556831   61720 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
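
Because no preloaded images were found and /preloaded.tar.lz4 did not yet exist on the guest, the runner copies the preload tarball over and unpacks it into /var with lz4 compression and extended attributes preserved. A short Go sketch of the extraction step, shelling out to the same tar invocation, is below; extractPreload is an illustrative helper, and the scp step is only hinted at in a comment.

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

// extractPreload unpacks the preload tarball into /var, preserving the
// security.capability xattrs, mirroring the tar command in the log above.
func extractPreload(tarball string) error {
	if _, err := os.Stat(tarball); err != nil {
		return err // in minikube the tarball is scp'd over first when this check fails
	}
	cmd := exec.Command("tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("tar: %v\n%s", err, out)
	}
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		log.Fatal(err)
	}
}
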
	I0806 00:14:53.014036   62278 start.go:356] acquired machines lock for "stopped-upgrade-936666" in 6.268019049s
	I0806 00:14:53.014087   62278 start.go:91] Provisioning new machine with config: &{Name:stopped-upgrade-936666 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopp
ed-upgrade-936666 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fal
se CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0806 00:14:53.014197   62278 start.go:131] createHost starting for "" (driver="kvm2")
	I0806 00:14:52.768207   62044 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0806 00:14:52.768237   62044 machine.go:97] duration metric: took 6.398573772s to provisionDockerMachine
	I0806 00:14:52.768252   62044 start.go:293] postStartSetup for "pause-161508" (driver="kvm2")
	I0806 00:14:52.768266   62044 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 00:14:52.768286   62044 main.go:141] libmachine: (pause-161508) Calling .DriverName
	I0806 00:14:52.768771   62044 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 00:14:52.768800   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHHostname
	I0806 00:14:52.772553   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:52.773026   62044 main.go:141] libmachine: (pause-161508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:8e:29", ip: ""} in network mk-pause-161508: {Iface:virbr3 ExpiryTime:2024-08-06 01:13:19 +0000 UTC Type:0 Mac:52:54:00:33:8e:29 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:pause-161508 Clientid:01:52:54:00:33:8e:29}
	I0806 00:14:52.773057   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined IP address 192.168.39.118 and MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:52.773385   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHPort
	I0806 00:14:52.773599   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHKeyPath
	I0806 00:14:52.773756   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHUsername
	I0806 00:14:52.774022   62044 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/pause-161508/id_rsa Username:docker}
	I0806 00:14:52.858402   62044 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 00:14:52.864471   62044 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 00:14:52.864505   62044 filesync.go:126] Scanning /home/jenkins/minikube-integration/19373-9606/.minikube/addons for local assets ...
	I0806 00:14:52.864570   62044 filesync.go:126] Scanning /home/jenkins/minikube-integration/19373-9606/.minikube/files for local assets ...
	I0806 00:14:52.864674   62044 filesync.go:149] local asset: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem -> 167922.pem in /etc/ssl/certs
	I0806 00:14:52.864774   62044 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 00:14:52.875107   62044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem --> /etc/ssl/certs/167922.pem (1708 bytes)
	I0806 00:14:52.902994   62044 start.go:296] duration metric: took 134.72929ms for postStartSetup
	I0806 00:14:52.903034   62044 fix.go:56] duration metric: took 6.558964017s for fixHost
	I0806 00:14:52.903069   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHHostname
	I0806 00:14:52.905787   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:52.906163   62044 main.go:141] libmachine: (pause-161508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:8e:29", ip: ""} in network mk-pause-161508: {Iface:virbr3 ExpiryTime:2024-08-06 01:13:19 +0000 UTC Type:0 Mac:52:54:00:33:8e:29 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:pause-161508 Clientid:01:52:54:00:33:8e:29}
	I0806 00:14:52.906193   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined IP address 192.168.39.118 and MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:52.906354   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHPort
	I0806 00:14:52.906552   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHKeyPath
	I0806 00:14:52.906724   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHKeyPath
	I0806 00:14:52.906870   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHUsername
	I0806 00:14:52.907046   62044 main.go:141] libmachine: Using SSH client type: native
	I0806 00:14:52.907265   62044 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.118 22 <nil> <nil>}
	I0806 00:14:52.907276   62044 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0806 00:14:53.013802   62044 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722903293.008614911
	
	I0806 00:14:53.013833   62044 fix.go:216] guest clock: 1722903293.008614911
	I0806 00:14:53.013843   62044 fix.go:229] Guest: 2024-08-06 00:14:53.008614911 +0000 UTC Remote: 2024-08-06 00:14:52.903038034 +0000 UTC m=+11.159767359 (delta=105.576877ms)
	I0806 00:14:53.013868   62044 fix.go:200] guest clock delta is within tolerance: 105.576877ms
	I0806 00:14:53.013875   62044 start.go:83] releasing machines lock for "pause-161508", held for 6.669834897s
	I0806 00:14:53.013902   62044 main.go:141] libmachine: (pause-161508) Calling .DriverName
	I0806 00:14:53.014197   62044 main.go:141] libmachine: (pause-161508) Calling .GetIP
	I0806 00:14:53.017386   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:53.017783   62044 main.go:141] libmachine: (pause-161508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:8e:29", ip: ""} in network mk-pause-161508: {Iface:virbr3 ExpiryTime:2024-08-06 01:13:19 +0000 UTC Type:0 Mac:52:54:00:33:8e:29 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:pause-161508 Clientid:01:52:54:00:33:8e:29}
	I0806 00:14:53.017818   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined IP address 192.168.39.118 and MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:53.017973   62044 main.go:141] libmachine: (pause-161508) Calling .DriverName
	I0806 00:14:53.018597   62044 main.go:141] libmachine: (pause-161508) Calling .DriverName
	I0806 00:14:53.018807   62044 main.go:141] libmachine: (pause-161508) Calling .DriverName
	I0806 00:14:53.018919   62044 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 00:14:53.018957   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHHostname
	I0806 00:14:53.018977   62044 ssh_runner.go:195] Run: cat /version.json
	I0806 00:14:53.019001   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHHostname
	I0806 00:14:53.021792   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:53.022186   62044 main.go:141] libmachine: (pause-161508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:8e:29", ip: ""} in network mk-pause-161508: {Iface:virbr3 ExpiryTime:2024-08-06 01:13:19 +0000 UTC Type:0 Mac:52:54:00:33:8e:29 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:pause-161508 Clientid:01:52:54:00:33:8e:29}
	I0806 00:14:53.022211   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined IP address 192.168.39.118 and MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:53.022232   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:53.022456   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHPort
	I0806 00:14:53.022705   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHKeyPath
	I0806 00:14:53.022735   62044 main.go:141] libmachine: (pause-161508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:8e:29", ip: ""} in network mk-pause-161508: {Iface:virbr3 ExpiryTime:2024-08-06 01:13:19 +0000 UTC Type:0 Mac:52:54:00:33:8e:29 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:pause-161508 Clientid:01:52:54:00:33:8e:29}
	I0806 00:14:53.022761   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined IP address 192.168.39.118 and MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:53.022922   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHUsername
	I0806 00:14:53.022980   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHPort
	I0806 00:14:53.023161   62044 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/pause-161508/id_rsa Username:docker}
	I0806 00:14:53.023305   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHKeyPath
	I0806 00:14:53.023458   62044 main.go:141] libmachine: (pause-161508) Calling .GetSSHUsername
	I0806 00:14:53.023609   62044 sshutil.go:53] new ssh client: &{IP:192.168.39.118 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/pause-161508/id_rsa Username:docker}
	I0806 00:14:53.100662   62044 ssh_runner.go:195] Run: systemctl --version
	I0806 00:14:53.127867   62044 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0806 00:14:53.286775   62044 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 00:14:53.298035   62044 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 00:14:53.298114   62044 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 00:14:53.310993   62044 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0806 00:14:53.311022   62044 start.go:495] detecting cgroup driver to use...
	I0806 00:14:53.311132   62044 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 00:14:53.334023   62044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:14:53.356468   62044 docker.go:217] disabling cri-docker service (if available) ...
	I0806 00:14:53.356540   62044 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0806 00:14:53.376425   62044 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0806 00:14:53.395643   62044 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0806 00:14:53.562361   62044 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0806 00:14:53.750649   62044 docker.go:233] disabling docker service ...
	I0806 00:14:53.750739   62044 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0806 00:14:53.770975   62044 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0806 00:14:53.787945   62044 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0806 00:14:53.968997   62044 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0806 00:14:54.130928   62044 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0806 00:14:54.149110   62044 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:14:54.171520   62044 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0806 00:14:54.171594   62044 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 00:14:54.184680   62044 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0806 00:14:54.184743   62044 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 00:14:54.197904   62044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 00:14:54.210910   62044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 00:14:54.225671   62044 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 00:14:54.238316   62044 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 00:14:54.251342   62044 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 00:14:54.263693   62044 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0806 00:14:54.275743   62044 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 00:14:54.286930   62044 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 00:14:54.298370   62044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:14:54.444776   62044 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0806 00:14:55.565977   62044 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.121158698s)
	I0806 00:14:55.566012   62044 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0806 00:14:55.566063   62044 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0806 00:14:55.572805   62044 start.go:563] Will wait 60s for crictl version
	I0806 00:14:55.572876   62044 ssh_runner.go:195] Run: which crictl
	I0806 00:14:55.578202   62044 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 00:14:55.634895   62044 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0806 00:14:55.634995   62044 ssh_runner.go:195] Run: crio --version
	I0806 00:14:55.675524   62044 ssh_runner.go:195] Run: crio --version
	I0806 00:14:55.718988   62044 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0806 00:14:53.098821   62278 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0806 00:14:53.099126   62278 main.go:134] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0806 00:14:53.099171   62278 main.go:134] libmachine: Launching plugin server for driver kvm2
	I0806 00:14:53.118437   62278 main.go:134] libmachine: Plugin server listening at address 127.0.0.1:42441
	I0806 00:14:53.118853   62278 main.go:134] libmachine: () Calling .GetVersion
	I0806 00:14:53.119475   62278 main.go:134] libmachine: Using API Version  1
	I0806 00:14:53.119494   62278 main.go:134] libmachine: () Calling .SetConfigRaw
	I0806 00:14:53.119844   62278 main.go:134] libmachine: () Calling .GetMachineName
	I0806 00:14:53.120063   62278 main.go:134] libmachine: (stopped-upgrade-936666) Calling .GetMachineName
	I0806 00:14:53.120245   62278 main.go:134] libmachine: (stopped-upgrade-936666) Calling .DriverName
	I0806 00:14:53.120406   62278 start.go:165] libmachine.API.Create for "stopped-upgrade-936666" (driver="kvm2")
	I0806 00:14:53.120426   62278 client.go:168] LocalClient.Create starting
	I0806 00:14:53.120453   62278 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem
	I0806 00:14:53.120481   62278 main.go:134] libmachine: Decoding PEM data...
	I0806 00:14:53.120499   62278 main.go:134] libmachine: Parsing certificate...
	I0806 00:14:53.120562   62278 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem
	I0806 00:14:53.120575   62278 main.go:134] libmachine: Decoding PEM data...
	I0806 00:14:53.120583   62278 main.go:134] libmachine: Parsing certificate...
	I0806 00:14:53.120596   62278 main.go:134] libmachine: Running pre-create checks...
	I0806 00:14:53.120602   62278 main.go:134] libmachine: (stopped-upgrade-936666) Calling .PreCreateCheck
	I0806 00:14:53.121013   62278 main.go:134] libmachine: (stopped-upgrade-936666) Calling .GetConfigRaw
	I0806 00:14:53.121506   62278 main.go:134] libmachine: Creating machine...
	I0806 00:14:53.121515   62278 main.go:134] libmachine: (stopped-upgrade-936666) Calling .Create
	I0806 00:14:53.121673   62278 main.go:134] libmachine: (stopped-upgrade-936666) Creating KVM machine...
	I0806 00:14:53.123023   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | found existing default KVM network
	I0806 00:14:53.124320   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | I0806 00:14:53.124156   62316 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:48:19:b6} reservation:<nil>}
	I0806 00:14:53.125074   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | I0806 00:14:53.124977   62316 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:9b:c7:ec} reservation:<nil>}
	I0806 00:14:53.126154   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | I0806 00:14:53.126069   62316 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002890f0}
	I0806 00:14:53.126182   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | created network xml: 
	I0806 00:14:53.126200   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | <network>
	I0806 00:14:53.126208   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG |   <name>mk-stopped-upgrade-936666</name>
	I0806 00:14:53.126214   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG |   <dns enable='no'/>
	I0806 00:14:53.126220   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG |   
	I0806 00:14:53.126225   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0806 00:14:53.126231   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG |     <dhcp>
	I0806 00:14:53.126237   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0806 00:14:53.126247   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG |     </dhcp>
	I0806 00:14:53.126254   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG |   </ip>
	I0806 00:14:53.126261   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG |   
	I0806 00:14:53.126268   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | </network>
	I0806 00:14:53.126277   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | 
	I0806 00:14:53.245778   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | trying to create private KVM network mk-stopped-upgrade-936666 192.168.61.0/24...
	I0806 00:14:53.321935   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | private KVM network mk-stopped-upgrade-936666 192.168.61.0/24 created
	I0806 00:14:53.322043   62278 main.go:134] libmachine: (stopped-upgrade-936666) Setting up store path in /home/jenkins/minikube-integration/19373-9606/.minikube/machines/stopped-upgrade-936666 ...
	I0806 00:14:53.322189   62278 main.go:134] libmachine: (stopped-upgrade-936666) Building disk image from file:///home/jenkins/minikube-integration/19373-9606/.minikube/cache/iso/amd64/minikube-v1.26.0-amd64.iso
	I0806 00:14:53.322216   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | I0806 00:14:53.322117   62316 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19373-9606/.minikube
	I0806 00:14:53.322308   62278 main.go:134] libmachine: (stopped-upgrade-936666) Downloading /home/jenkins/minikube-integration/19373-9606/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19373-9606/.minikube/cache/iso/amd64/minikube-v1.26.0-amd64.iso...
	I0806 00:14:53.539956   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | I0806 00:14:53.539797   62316 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/stopped-upgrade-936666/id_rsa...
	I0806 00:14:53.598845   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | I0806 00:14:53.598675   62316 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/stopped-upgrade-936666/stopped-upgrade-936666.rawdisk...
	I0806 00:14:53.598871   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | Writing magic tar header
	I0806 00:14:53.598892   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | Writing SSH key tar header
	I0806 00:14:53.598907   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | I0806 00:14:53.598785   62316 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19373-9606/.minikube/machines/stopped-upgrade-936666 ...
	I0806 00:14:53.598921   62278 main.go:134] libmachine: (stopped-upgrade-936666) Setting executable bit set on /home/jenkins/minikube-integration/19373-9606/.minikube/machines/stopped-upgrade-936666 (perms=drwx------)
	I0806 00:14:53.598933   62278 main.go:134] libmachine: (stopped-upgrade-936666) Setting executable bit set on /home/jenkins/minikube-integration/19373-9606/.minikube/machines (perms=drwxr-xr-x)
	I0806 00:14:53.598940   62278 main.go:134] libmachine: (stopped-upgrade-936666) Setting executable bit set on /home/jenkins/minikube-integration/19373-9606/.minikube (perms=drwxr-xr-x)
	I0806 00:14:53.598948   62278 main.go:134] libmachine: (stopped-upgrade-936666) Setting executable bit set on /home/jenkins/minikube-integration/19373-9606 (perms=drwxrwxr-x)
	I0806 00:14:53.598956   62278 main.go:134] libmachine: (stopped-upgrade-936666) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0806 00:14:53.598969   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19373-9606/.minikube/machines/stopped-upgrade-936666
	I0806 00:14:53.598993   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19373-9606/.minikube/machines
	I0806 00:14:53.599003   62278 main.go:134] libmachine: (stopped-upgrade-936666) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0806 00:14:53.599009   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19373-9606/.minikube
	I0806 00:14:53.599019   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19373-9606
	I0806 00:14:53.599035   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0806 00:14:53.599040   62278 main.go:134] libmachine: (stopped-upgrade-936666) Creating domain...
	I0806 00:14:53.599100   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | Checking permissions on dir: /home/jenkins
	I0806 00:14:53.599121   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | Checking permissions on dir: /home
	I0806 00:14:53.599139   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | Skipping /home - not owner
	I0806 00:14:53.600390   62278 main.go:134] libmachine: (stopped-upgrade-936666) define libvirt domain using xml: 
	I0806 00:14:53.600412   62278 main.go:134] libmachine: (stopped-upgrade-936666) <domain type='kvm'>
	I0806 00:14:53.600423   62278 main.go:134] libmachine: (stopped-upgrade-936666)   <name>stopped-upgrade-936666</name>
	I0806 00:14:53.600434   62278 main.go:134] libmachine: (stopped-upgrade-936666)   <memory unit='MiB'>2200</memory>
	I0806 00:14:53.600440   62278 main.go:134] libmachine: (stopped-upgrade-936666)   <vcpu>2</vcpu>
	I0806 00:14:53.600449   62278 main.go:134] libmachine: (stopped-upgrade-936666)   <features>
	I0806 00:14:53.600454   62278 main.go:134] libmachine: (stopped-upgrade-936666)     <acpi/>
	I0806 00:14:53.600459   62278 main.go:134] libmachine: (stopped-upgrade-936666)     <apic/>
	I0806 00:14:53.600464   62278 main.go:134] libmachine: (stopped-upgrade-936666)     <pae/>
	I0806 00:14:53.600468   62278 main.go:134] libmachine: (stopped-upgrade-936666)     
	I0806 00:14:53.600474   62278 main.go:134] libmachine: (stopped-upgrade-936666)   </features>
	I0806 00:14:53.600479   62278 main.go:134] libmachine: (stopped-upgrade-936666)   <cpu mode='host-passthrough'>
	I0806 00:14:53.600484   62278 main.go:134] libmachine: (stopped-upgrade-936666)   
	I0806 00:14:53.600488   62278 main.go:134] libmachine: (stopped-upgrade-936666)   </cpu>
	I0806 00:14:53.600493   62278 main.go:134] libmachine: (stopped-upgrade-936666)   <os>
	I0806 00:14:53.600497   62278 main.go:134] libmachine: (stopped-upgrade-936666)     <type>hvm</type>
	I0806 00:14:53.600503   62278 main.go:134] libmachine: (stopped-upgrade-936666)     <boot dev='cdrom'/>
	I0806 00:14:53.600507   62278 main.go:134] libmachine: (stopped-upgrade-936666)     <boot dev='hd'/>
	I0806 00:14:53.600512   62278 main.go:134] libmachine: (stopped-upgrade-936666)     <bootmenu enable='no'/>
	I0806 00:14:53.600518   62278 main.go:134] libmachine: (stopped-upgrade-936666)   </os>
	I0806 00:14:53.600523   62278 main.go:134] libmachine: (stopped-upgrade-936666)   <devices>
	I0806 00:14:53.600528   62278 main.go:134] libmachine: (stopped-upgrade-936666)     <disk type='file' device='cdrom'>
	I0806 00:14:53.600538   62278 main.go:134] libmachine: (stopped-upgrade-936666)       <source file='/home/jenkins/minikube-integration/19373-9606/.minikube/machines/stopped-upgrade-936666/boot2docker.iso'/>
	I0806 00:14:53.600542   62278 main.go:134] libmachine: (stopped-upgrade-936666)       <target dev='hdc' bus='scsi'/>
	I0806 00:14:53.600548   62278 main.go:134] libmachine: (stopped-upgrade-936666)       <readonly/>
	I0806 00:14:53.600552   62278 main.go:134] libmachine: (stopped-upgrade-936666)     </disk>
	I0806 00:14:53.600558   62278 main.go:134] libmachine: (stopped-upgrade-936666)     <disk type='file' device='disk'>
	I0806 00:14:53.600564   62278 main.go:134] libmachine: (stopped-upgrade-936666)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0806 00:14:53.600587   62278 main.go:134] libmachine: (stopped-upgrade-936666)       <source file='/home/jenkins/minikube-integration/19373-9606/.minikube/machines/stopped-upgrade-936666/stopped-upgrade-936666.rawdisk'/>
	I0806 00:14:53.600597   62278 main.go:134] libmachine: (stopped-upgrade-936666)       <target dev='hda' bus='virtio'/>
	I0806 00:14:53.600602   62278 main.go:134] libmachine: (stopped-upgrade-936666)     </disk>
	I0806 00:14:53.600608   62278 main.go:134] libmachine: (stopped-upgrade-936666)     <interface type='network'>
	I0806 00:14:53.600615   62278 main.go:134] libmachine: (stopped-upgrade-936666)       <source network='mk-stopped-upgrade-936666'/>
	I0806 00:14:53.600620   62278 main.go:134] libmachine: (stopped-upgrade-936666)       <model type='virtio'/>
	I0806 00:14:53.600625   62278 main.go:134] libmachine: (stopped-upgrade-936666)     </interface>
	I0806 00:14:53.600633   62278 main.go:134] libmachine: (stopped-upgrade-936666)     <interface type='network'>
	I0806 00:14:53.600639   62278 main.go:134] libmachine: (stopped-upgrade-936666)       <source network='default'/>
	I0806 00:14:53.600644   62278 main.go:134] libmachine: (stopped-upgrade-936666)       <model type='virtio'/>
	I0806 00:14:53.600648   62278 main.go:134] libmachine: (stopped-upgrade-936666)     </interface>
	I0806 00:14:53.600653   62278 main.go:134] libmachine: (stopped-upgrade-936666)     <serial type='pty'>
	I0806 00:14:53.600658   62278 main.go:134] libmachine: (stopped-upgrade-936666)       <target port='0'/>
	I0806 00:14:53.600666   62278 main.go:134] libmachine: (stopped-upgrade-936666)     </serial>
	I0806 00:14:53.600671   62278 main.go:134] libmachine: (stopped-upgrade-936666)     <console type='pty'>
	I0806 00:14:53.600676   62278 main.go:134] libmachine: (stopped-upgrade-936666)       <target type='serial' port='0'/>
	I0806 00:14:53.600681   62278 main.go:134] libmachine: (stopped-upgrade-936666)     </console>
	I0806 00:14:53.600685   62278 main.go:134] libmachine: (stopped-upgrade-936666)     <rng model='virtio'>
	I0806 00:14:53.600691   62278 main.go:134] libmachine: (stopped-upgrade-936666)       <backend model='random'>/dev/random</backend>
	I0806 00:14:53.600697   62278 main.go:134] libmachine: (stopped-upgrade-936666)     </rng>
	I0806 00:14:53.600702   62278 main.go:134] libmachine: (stopped-upgrade-936666)     
	I0806 00:14:53.600706   62278 main.go:134] libmachine: (stopped-upgrade-936666)     
	I0806 00:14:53.600711   62278 main.go:134] libmachine: (stopped-upgrade-936666)   </devices>
	I0806 00:14:53.600716   62278 main.go:134] libmachine: (stopped-upgrade-936666) </domain>
	I0806 00:14:53.600724   62278 main.go:134] libmachine: (stopped-upgrade-936666) 
	I0806 00:14:53.671013   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | domain stopped-upgrade-936666 has defined MAC address 52:54:00:aa:80:22 in network default
	I0806 00:14:53.671732   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | domain stopped-upgrade-936666 has defined MAC address 52:54:00:7b:85:58 in network mk-stopped-upgrade-936666
	I0806 00:14:53.671808   62278 main.go:134] libmachine: (stopped-upgrade-936666) Ensuring networks are active...
	I0806 00:14:53.672684   62278 main.go:134] libmachine: (stopped-upgrade-936666) Ensuring network default is active
	I0806 00:14:53.673005   62278 main.go:134] libmachine: (stopped-upgrade-936666) Ensuring network mk-stopped-upgrade-936666 is active
	I0806 00:14:53.673659   62278 main.go:134] libmachine: (stopped-upgrade-936666) Getting domain xml...
	I0806 00:14:53.674706   62278 main.go:134] libmachine: (stopped-upgrade-936666) Creating domain...
	I0806 00:14:55.890304   62278 main.go:134] libmachine: (stopped-upgrade-936666) Waiting to get IP...
	I0806 00:14:55.891201   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | domain stopped-upgrade-936666 has defined MAC address 52:54:00:7b:85:58 in network mk-stopped-upgrade-936666
	I0806 00:14:55.891630   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | unable to find current IP address of domain stopped-upgrade-936666 in network mk-stopped-upgrade-936666
	I0806 00:14:55.891656   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | I0806 00:14:55.891621   62316 retry.go:31] will retry after 294.68666ms: waiting for machine to come up
	I0806 00:14:56.188405   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | domain stopped-upgrade-936666 has defined MAC address 52:54:00:7b:85:58 in network mk-stopped-upgrade-936666
	I0806 00:14:56.188887   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | unable to find current IP address of domain stopped-upgrade-936666 in network mk-stopped-upgrade-936666
	I0806 00:14:56.188907   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | I0806 00:14:56.188835   62316 retry.go:31] will retry after 311.048191ms: waiting for machine to come up
	I0806 00:14:56.501266   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | domain stopped-upgrade-936666 has defined MAC address 52:54:00:7b:85:58 in network mk-stopped-upgrade-936666
	I0806 00:14:56.501793   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | unable to find current IP address of domain stopped-upgrade-936666 in network mk-stopped-upgrade-936666
	I0806 00:14:56.501910   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | I0806 00:14:56.501852   62316 retry.go:31] will retry after 347.169902ms: waiting for machine to come up
	I0806 00:14:55.720363   62044 main.go:141] libmachine: (pause-161508) Calling .GetIP
	I0806 00:14:55.723606   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:55.723915   62044 main.go:141] libmachine: (pause-161508) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:33:8e:29", ip: ""} in network mk-pause-161508: {Iface:virbr3 ExpiryTime:2024-08-06 01:13:19 +0000 UTC Type:0 Mac:52:54:00:33:8e:29 Iaid: IPaddr:192.168.39.118 Prefix:24 Hostname:pause-161508 Clientid:01:52:54:00:33:8e:29}
	I0806 00:14:55.723945   62044 main.go:141] libmachine: (pause-161508) DBG | domain pause-161508 has defined IP address 192.168.39.118 and MAC address 52:54:00:33:8e:29 in network mk-pause-161508
	I0806 00:14:55.724210   62044 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0806 00:14:55.730905   62044 kubeadm.go:883] updating cluster {Name:pause-161508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3
ClusterName:pause-161508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.118 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:fals
e olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 00:14:55.731109   62044 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0806 00:14:55.731169   62044 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 00:14:55.795823   62044 crio.go:514] all images are preloaded for cri-o runtime.
	I0806 00:14:55.795857   62044 crio.go:433] Images already preloaded, skipping extraction
	I0806 00:14:55.795919   62044 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 00:14:55.836123   62044 crio.go:514] all images are preloaded for cri-o runtime.
	I0806 00:14:55.836147   62044 cache_images.go:84] Images are preloaded, skipping loading
	I0806 00:14:55.836157   62044 kubeadm.go:934] updating node { 192.168.39.118 8443 v1.30.3 crio true true} ...
	I0806 00:14:55.836287   62044 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-161508 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.118
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:pause-161508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 00:14:55.836381   62044 ssh_runner.go:195] Run: crio config
	I0806 00:14:55.889286   62044 cni.go:84] Creating CNI manager for ""
	I0806 00:14:55.889312   62044 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 00:14:55.889323   62044 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 00:14:55.889351   62044 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.118 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-161508 NodeName:pause-161508 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.118"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.118 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 00:14:55.889555   62044 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.118
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-161508"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.118
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.118"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 00:14:55.889623   62044 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0806 00:14:55.932046   62044 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 00:14:55.932135   62044 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 00:14:55.950419   62044 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0806 00:14:56.036763   62044 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 00:14:56.159694   62044 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0806 00:14:56.254787   62044 ssh_runner.go:195] Run: grep 192.168.39.118	control-plane.minikube.internal$ /etc/hosts
	I0806 00:14:56.288571   62044 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:14:56.575805   62044 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 00:14:56.754768   62044 certs.go:68] Setting up /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/pause-161508 for IP: 192.168.39.118
	I0806 00:14:56.754791   62044 certs.go:194] generating shared ca certs ...
	I0806 00:14:56.754810   62044 certs.go:226] acquiring lock for ca certs: {Name:mkf35a042c1656d191f542eee7fa087aad4d29d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:14:56.755074   62044 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key
	I0806 00:14:56.755141   62044 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key
	I0806 00:14:56.755154   62044 certs.go:256] generating profile certs ...
	I0806 00:14:56.755260   62044 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/pause-161508/client.key
	I0806 00:14:56.755339   62044 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/pause-161508/apiserver.key.423b175f
	I0806 00:14:56.755386   62044 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/pause-161508/proxy-client.key
	I0806 00:14:56.755522   62044 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792.pem (1338 bytes)
	W0806 00:14:56.755559   62044 certs.go:480] ignoring /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792_empty.pem, impossibly tiny 0 bytes
	I0806 00:14:56.755570   62044 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem (1679 bytes)
	I0806 00:14:56.755607   62044 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem (1082 bytes)
	I0806 00:14:56.755656   62044 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem (1123 bytes)
	I0806 00:14:56.755693   62044 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem (1679 bytes)
	I0806 00:14:56.755748   62044 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem (1708 bytes)
	I0806 00:14:56.756618   62044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 00:14:52.132666   61720 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.575793522s)
	I0806 00:14:52.132708   61720 crio.go:469] duration metric: took 2.575934958s to extract the tarball
	I0806 00:14:52.132718   61720 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0806 00:14:52.178655   61720 ssh_runner.go:195] Run: sudo crictl images --output json
	I0806 00:14:52.228379   61720 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0806 00:14:52.228410   61720 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0806 00:14:52.228492   61720 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0806 00:14:52.228495   61720 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:14:52.228503   61720 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0806 00:14:52.228568   61720 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0806 00:14:52.228594   61720 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0806 00:14:52.228592   61720 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0806 00:14:52.228636   61720 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0806 00:14:52.228641   61720 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 00:14:52.229894   61720 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0806 00:14:52.229923   61720 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0806 00:14:52.229926   61720 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0806 00:14:52.229893   61720 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0806 00:14:52.229939   61720 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:14:52.229949   61720 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0806 00:14:52.229901   61720 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0806 00:14:52.229956   61720 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 00:14:52.369603   61720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0806 00:14:52.373690   61720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0806 00:14:52.419211   61720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 00:14:52.421008   61720 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0806 00:14:52.421051   61720 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0806 00:14:52.421110   61720 ssh_runner.go:195] Run: which crictl
	I0806 00:14:52.437302   61720 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0806 00:14:52.437345   61720 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0806 00:14:52.437393   61720 ssh_runner.go:195] Run: which crictl
	I0806 00:14:52.449950   61720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0806 00:14:52.469829   61720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0806 00:14:52.469876   61720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0806 00:14:52.470021   61720 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0806 00:14:52.470060   61720 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 00:14:52.470095   61720 ssh_runner.go:195] Run: which crictl
	I0806 00:14:52.544769   61720 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0806 00:14:52.545022   61720 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0806 00:14:52.545061   61720 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0806 00:14:52.545119   61720 ssh_runner.go:195] Run: which crictl
	I0806 00:14:52.557704   61720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0806 00:14:52.557713   61720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0806 00:14:52.557759   61720 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0806 00:14:52.575840   61720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0806 00:14:52.598803   61720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0806 00:14:52.633351   61720 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0806 00:14:52.633411   61720 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0806 00:14:52.640820   61720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0806 00:14:52.664322   61720 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0806 00:14:52.664375   61720 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0806 00:14:52.664433   61720 ssh_runner.go:195] Run: which crictl
	I0806 00:14:52.686664   61720 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0806 00:14:52.686704   61720 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0806 00:14:52.686751   61720 ssh_runner.go:195] Run: which crictl
	I0806 00:14:52.710267   61720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0806 00:14:52.710288   61720 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0806 00:14:52.710295   61720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0806 00:14:52.710322   61720 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0806 00:14:52.710348   61720 ssh_runner.go:195] Run: which crictl
	I0806 00:14:52.753209   61720 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0806 00:14:52.773045   61720 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0806 00:14:52.773045   61720 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0806 00:14:52.794936   61720 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19373-9606/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0806 00:14:53.168413   61720 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:14:53.311791   61720 cache_images.go:92] duration metric: took 1.083360411s to LoadCachedImages
	W0806 00:14:53.311894   61720 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19373-9606/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0806 00:14:53.311912   61720 kubeadm.go:934] updating node { 192.168.72.112 8443 v1.20.0 crio true true} ...
	I0806 00:14:53.312034   61720 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-907863 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.112
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-907863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 00:14:53.312108   61720 ssh_runner.go:195] Run: crio config
	I0806 00:14:53.380642   61720 cni.go:84] Creating CNI manager for ""
	I0806 00:14:53.380662   61720 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0806 00:14:53.380674   61720 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 00:14:53.380698   61720 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.112 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-907863 NodeName:kubernetes-upgrade-907863 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.112"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.112 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0806 00:14:53.380923   61720 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.112
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-907863"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.112
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.112"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 00:14:53.380997   61720 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0806 00:14:53.395339   61720 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 00:14:53.395423   61720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 00:14:53.411555   61720 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0806 00:14:53.433132   61720 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 00:14:53.455825   61720 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0806 00:14:53.476294   61720 ssh_runner.go:195] Run: grep 192.168.72.112	control-plane.minikube.internal$ /etc/hosts
	I0806 00:14:53.480668   61720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.112	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 00:14:53.499600   61720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:14:53.652974   61720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 00:14:53.677860   61720 certs.go:68] Setting up /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863 for IP: 192.168.72.112
	I0806 00:14:53.677891   61720 certs.go:194] generating shared ca certs ...
	I0806 00:14:53.677911   61720 certs.go:226] acquiring lock for ca certs: {Name:mkf35a042c1656d191f542eee7fa087aad4d29d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:14:53.678068   61720 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key
	I0806 00:14:53.678134   61720 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key
	I0806 00:14:53.678149   61720 certs.go:256] generating profile certs ...
	I0806 00:14:53.678226   61720 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/client.key
	I0806 00:14:53.678247   61720 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/client.crt with IP's: []
	I0806 00:14:53.891591   61720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/client.crt ...
	I0806 00:14:53.891629   61720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/client.crt: {Name:mka73080179836a3e5f00f6563ab46864f07d0b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:14:53.891808   61720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/client.key ...
	I0806 00:14:53.891824   61720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/client.key: {Name:mka33cfcfc39b86c3df16be006a98c42ce1b23f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:14:53.891911   61720 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/apiserver.key.777d71ca
	I0806 00:14:53.891933   61720 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/apiserver.crt.777d71ca with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.112]
	I0806 00:14:54.037095   61720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/apiserver.crt.777d71ca ...
	I0806 00:14:54.037146   61720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/apiserver.crt.777d71ca: {Name:mkdbd1ad9bf1e099ce927cbbd16ee9537c57abec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:14:54.037338   61720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/apiserver.key.777d71ca ...
	I0806 00:14:54.037353   61720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/apiserver.key.777d71ca: {Name:mke232de9779080cad9e9caed41be9d6d22833d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:14:54.037428   61720 certs.go:381] copying /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/apiserver.crt.777d71ca -> /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/apiserver.crt
	I0806 00:14:54.037527   61720 certs.go:385] copying /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/apiserver.key.777d71ca -> /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/apiserver.key
	I0806 00:14:54.037593   61720 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/proxy-client.key
	I0806 00:14:54.037611   61720 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/proxy-client.crt with IP's: []
	I0806 00:14:54.104925   61720 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/proxy-client.crt ...
	I0806 00:14:54.104968   61720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/proxy-client.crt: {Name:mk963f01277aaeaa47218702211ab49a2a05b2d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:14:54.158476   61720 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/proxy-client.key ...
	I0806 00:14:54.158516   61720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/proxy-client.key: {Name:mk3b859fbef7364d8f865e5e69cf276e01b899be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:14:54.158797   61720 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792.pem (1338 bytes)
	W0806 00:14:54.158850   61720 certs.go:480] ignoring /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792_empty.pem, impossibly tiny 0 bytes
	I0806 00:14:54.158864   61720 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca-key.pem (1679 bytes)
	I0806 00:14:54.158896   61720 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/ca.pem (1082 bytes)
	I0806 00:14:54.158952   61720 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/cert.pem (1123 bytes)
	I0806 00:14:54.158997   61720 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/certs/key.pem (1679 bytes)
	I0806 00:14:54.159081   61720 certs.go:484] found cert: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem (1708 bytes)
	I0806 00:14:54.159895   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 00:14:54.189387   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 00:14:54.217878   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 00:14:54.247509   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0806 00:14:54.276131   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0806 00:14:54.306893   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0806 00:14:54.334293   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 00:14:54.362197   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/kubernetes-upgrade-907863/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 00:14:54.392159   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem --> /usr/share/ca-certificates/167922.pem (1708 bytes)
	I0806 00:14:54.420680   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 00:14:54.456455   61720 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792.pem --> /usr/share/ca-certificates/16792.pem (1338 bytes)
	I0806 00:14:54.486139   61720 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 00:14:54.504953   61720 ssh_runner.go:195] Run: openssl version
	I0806 00:14:54.511853   61720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 00:14:54.524862   61720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:14:54.529585   61720 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:14:54.529642   61720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:14:54.535690   61720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 00:14:54.550055   61720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16792.pem && ln -fs /usr/share/ca-certificates/16792.pem /etc/ssl/certs/16792.pem"
	I0806 00:14:54.572183   61720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16792.pem
	I0806 00:14:54.582553   61720 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 23:03 /usr/share/ca-certificates/16792.pem
	I0806 00:14:54.582619   61720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16792.pem
	I0806 00:14:54.590967   61720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16792.pem /etc/ssl/certs/51391683.0"
	I0806 00:14:54.615850   61720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167922.pem && ln -fs /usr/share/ca-certificates/167922.pem /etc/ssl/certs/167922.pem"
	I0806 00:14:54.636139   61720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167922.pem
	I0806 00:14:54.641803   61720 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 23:03 /usr/share/ca-certificates/167922.pem
	I0806 00:14:54.641870   61720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167922.pem
	I0806 00:14:54.648958   61720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167922.pem /etc/ssl/certs/3ec20f2e.0"
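Editor's note: the repeated pattern above installs each CA into /usr/share/ca-certificates and links it under /etc/ssl/certs by its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL locates trust anchors. A rough equivalent in Go, assuming openssl is on PATH; linkCertByHash is an illustrative helper, not a minikube function:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCertByHash computes the OpenSSL subject hash of a PEM certificate and
    // creates the <certsDir>/<hash>.0 symlink that OpenSSL uses for CA lookup.
    func linkCertByHash(pemPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pemPath, err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join(certsDir, hash+".0")
        // Replace any stale link, mirroring the ln -fs in the log above.
        _ = os.Remove(link)
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }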
	I0806 00:14:54.664107   61720 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 00:14:54.669336   61720 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0806 00:14:54.669395   61720 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-907863 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.20.0 ClusterName:kubernetes-upgrade-907863 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.112 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:14:54.669544   61720 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 00:14:54.669609   61720 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 00:14:54.719934   61720 cri.go:89] found id: ""
	I0806 00:14:54.719996   61720 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 00:14:54.732226   61720 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 00:14:54.743958   61720 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 00:14:54.754033   61720 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 00:14:54.754060   61720 kubeadm.go:157] found existing configuration files:
	
	I0806 00:14:54.754116   61720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 00:14:54.763793   61720 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 00:14:54.763871   61720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 00:14:54.774255   61720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 00:14:54.784427   61720 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 00:14:54.784499   61720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 00:14:54.796822   61720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 00:14:54.807691   61720 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 00:14:54.807751   61720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 00:14:54.818222   61720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 00:14:54.830068   61720 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 00:14:54.830140   61720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 00:14:54.841016   61720 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 00:14:55.148580   61720 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
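Editor's note: the StartCluster path then launches kubeadm init as a single bash -c command with a fixed --ignore-preflight-errors list. A simplified sketch of assembling and running that command from Go; the flag list is abbreviated from the log and everything else is illustrative:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Abbreviated from the --ignore-preflight-errors list in the log above.
        ignored := []string{
            "DirAvailable--etc-kubernetes-manifests",
            "DirAvailable--var-lib-minikube",
            "FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml",
            "Port-10250", "Swap", "NumCPU", "Mem",
        }
        // kubeadm is run from the version-pinned binaries directory via bash -c,
        // similar to the ssh_runner invocation in the log.
        cmdline := fmt.Sprintf(
            "sudo env PATH=/var/lib/minikube/binaries/v1.20.0:$PATH kubeadm init"+
                " --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=%s",
            strings.Join(ignored, ","))
        out, err := exec.Command("/bin/bash", "-c", cmdline).CombinedOutput()
        fmt.Println(string(out))
        if err != nil {
            fmt.Println("kubeadm init failed:", err)
        }
    }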
	I0806 00:14:56.850785   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | domain stopped-upgrade-936666 has defined MAC address 52:54:00:7b:85:58 in network mk-stopped-upgrade-936666
	I0806 00:14:56.851346   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | unable to find current IP address of domain stopped-upgrade-936666 in network mk-stopped-upgrade-936666
	I0806 00:14:56.851372   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | I0806 00:14:56.851297   62316 retry.go:31] will retry after 460.233406ms: waiting for machine to come up
	I0806 00:14:57.312810   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | domain stopped-upgrade-936666 has defined MAC address 52:54:00:7b:85:58 in network mk-stopped-upgrade-936666
	I0806 00:14:57.313343   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | unable to find current IP address of domain stopped-upgrade-936666 in network mk-stopped-upgrade-936666
	I0806 00:14:57.313368   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | I0806 00:14:57.313274   62316 retry.go:31] will retry after 673.92191ms: waiting for machine to come up
	I0806 00:14:57.988696   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | domain stopped-upgrade-936666 has defined MAC address 52:54:00:7b:85:58 in network mk-stopped-upgrade-936666
	I0806 00:14:57.989268   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | unable to find current IP address of domain stopped-upgrade-936666 in network mk-stopped-upgrade-936666
	I0806 00:14:57.989294   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | I0806 00:14:57.989206   62316 retry.go:31] will retry after 742.239606ms: waiting for machine to come up
	I0806 00:14:58.733669   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | domain stopped-upgrade-936666 has defined MAC address 52:54:00:7b:85:58 in network mk-stopped-upgrade-936666
	I0806 00:14:58.734242   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | unable to find current IP address of domain stopped-upgrade-936666 in network mk-stopped-upgrade-936666
	I0806 00:14:58.734267   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | I0806 00:14:58.734186   62316 retry.go:31] will retry after 1.085265631s: waiting for machine to come up
	I0806 00:14:59.821563   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | domain stopped-upgrade-936666 has defined MAC address 52:54:00:7b:85:58 in network mk-stopped-upgrade-936666
	I0806 00:14:59.822095   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | unable to find current IP address of domain stopped-upgrade-936666 in network mk-stopped-upgrade-936666
	I0806 00:14:59.822120   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | I0806 00:14:59.822056   62316 retry.go:31] will retry after 1.312616827s: waiting for machine to come up
	I0806 00:15:01.136328   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | domain stopped-upgrade-936666 has defined MAC address 52:54:00:7b:85:58 in network mk-stopped-upgrade-936666
	I0806 00:15:01.136818   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | unable to find current IP address of domain stopped-upgrade-936666 in network mk-stopped-upgrade-936666
	I0806 00:15:01.136861   62278 main.go:134] libmachine: (stopped-upgrade-936666) DBG | I0806 00:15:01.136762   62316 retry.go:31] will retry after 1.457249872s: waiting for machine to come up
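Editor's note: meanwhile the stopped-upgrade-936666 machine is still booting, and libmachine polls for its IP with growing delays (460ms, 673ms, 742ms, 1.08s, 1.31s, 1.46s). A minimal sketch of that retry-with-backoff pattern; lookupIP is a hypothetical stand-in for the libvirt query:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP is a hypothetical stand-in for querying libvirt for the domain's IP.
    func lookupIP() (string, error) {
        return "", errors.New("unable to find current IP address")
    }

    // waitForIP retries lookupIP with jittered, roughly doubling delays until it
    // succeeds or the deadline passes, similar in spirit to retry.go above.
    func waitForIP(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 400 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil {
                return ip, nil
            }
            wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
            time.Sleep(wait)
            delay *= 2
        }
        return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
        if _, err := waitForIP(5 * time.Second); err != nil {
            fmt.Println(err)
        }
    }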
	I0806 00:14:56.879774   62044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 00:14:56.952696   62044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 00:14:57.016032   62044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0806 00:14:57.114988   62044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/pause-161508/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0806 00:14:57.176152   62044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/pause-161508/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0806 00:14:57.210978   62044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/pause-161508/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 00:14:57.252853   62044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/pause-161508/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 00:14:57.316820   62044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/certs/16792.pem --> /usr/share/ca-certificates/16792.pem (1338 bytes)
	I0806 00:14:57.363002   62044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/ssl/certs/167922.pem --> /usr/share/ca-certificates/167922.pem (1708 bytes)
	I0806 00:14:57.399698   62044 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19373-9606/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 00:14:57.432814   62044 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 00:14:57.453941   62044 ssh_runner.go:195] Run: openssl version
	I0806 00:14:57.464156   62044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16792.pem && ln -fs /usr/share/ca-certificates/16792.pem /etc/ssl/certs/16792.pem"
	I0806 00:14:57.489040   62044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16792.pem
	I0806 00:14:57.494783   62044 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 23:03 /usr/share/ca-certificates/16792.pem
	I0806 00:14:57.494877   62044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16792.pem
	I0806 00:14:57.504614   62044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16792.pem /etc/ssl/certs/51391683.0"
	I0806 00:14:57.517885   62044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167922.pem && ln -fs /usr/share/ca-certificates/167922.pem /etc/ssl/certs/167922.pem"
	I0806 00:14:57.532455   62044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167922.pem
	I0806 00:14:57.538611   62044 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 23:03 /usr/share/ca-certificates/167922.pem
	I0806 00:14:57.538681   62044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167922.pem
	I0806 00:14:57.548094   62044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167922.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 00:14:57.563012   62044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 00:14:57.580706   62044 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:14:57.587499   62044 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 22:50 /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:14:57.587569   62044 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:14:57.600755   62044 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 00:14:57.617274   62044 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 00:14:57.625073   62044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0806 00:14:57.633761   62044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0806 00:14:57.642962   62044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0806 00:14:57.651893   62044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0806 00:14:57.660085   62044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0806 00:14:57.675488   62044 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
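Editor's note: for the pause-161508 profile the certificates already exist, so instead of regenerating them minikube runs "openssl x509 -checkend 86400" against each one, i.e. it asks whether the certificate expires within the next 24 hours. The same check done natively in Go (path and threshold taken from the log; expiresWithin is an illustrative helper):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // mirroring openssl x509 -checkend 86400 from the log above.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("expires within 24h:", soon)
    }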
	I0806 00:14:57.683789   62044 kubeadm.go:392] StartCluster: {Name:pause-161508 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:pause-161508 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.118 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false o
lm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:14:57.683936   62044 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0806 00:14:57.684025   62044 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0806 00:14:57.779721   62044 cri.go:89] found id: "7d8cf53ea71f671cd11c77d76585125000808e1e5e9dbdf057515fae3694c8c2"
	I0806 00:14:57.779749   62044 cri.go:89] found id: "6c3e3869967dcdea9538e99cfba9fa7cbeab8604b70330171ff36214ad65dc4f"
	I0806 00:14:57.779757   62044 cri.go:89] found id: "b5f13fe4c6e99948bd3db06aa7e20e2aa8073f836fe73e27f62926299efa70db"
	I0806 00:14:57.779765   62044 cri.go:89] found id: "1bf2df2d254dca2dd27d3eae24da873f45a9ff1fbdfc0ea1dd1a35201bcd069a"
	I0806 00:14:57.779771   62044 cri.go:89] found id: "e7bde654f01ecd95054cba7e1831b15349cfc28b44f4f1a6722bec18d022099a"
	I0806 00:14:57.779776   62044 cri.go:89] found id: "6471bcdcb4ee5e45f9f8c1500088cb267ab957b707b6c9091e097c704b2d66d6"
	I0806 00:14:57.779780   62044 cri.go:89] found id: "bfaba2e9c5b00ff3bf65111355285eff0b912f5fc7bfb869f50fb2fffad3292c"
	I0806 00:14:57.779785   62044 cri.go:89] found id: "97903d796b6207952efa4d432caf2c3e60811379a89eae5fb77e2fa8c1a1d028"
	I0806 00:14:57.779790   62044 cri.go:89] found id: "895560f466b423fe1dfc2c8b3564008271d04a68b72ddc661ae492d8d6fe1900"
	I0806 00:14:57.779799   62044 cri.go:89] found id: "675d1cd5f51ab58fac223676eede1d4e46868c8e294ae5a521cd08300f62038b"
	I0806 00:14:57.779804   62044 cri.go:89] found id: ""
	I0806 00:14:57.779859   62044 ssh_runner.go:195] Run: sudo runc list -f json
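Editor's note: before kubeadm is touched again, the existing kube-system containers are enumerated through the CRI socket with "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system", then cross-checked with "runc list". A small Go sketch of the crictl listing, with the flags copied from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // List all container IDs (running or exited) labelled with the
        // kube-system namespace, as the cri.go step in the log does.
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        ids := strings.Fields(string(out))
        if len(ids) == 0 {
            fmt.Println(`found id: ""`)
            return
        }
        for _, id := range ids {
            fmt.Println("found id:", id)
        }
    }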
	
	
	==> CRI-O <==
	Aug 06 00:15:36 pause-161508 crio[2471]: time="2024-08-06 00:15:36.861020815Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722903336860983962,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8c33a0b5-0378-44dc-b789-e598c92fc1b2 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 00:15:36 pause-161508 crio[2471]: time="2024-08-06 00:15:36.861624864Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6b7ede17-a164-44ab-b45f-31c506080e6e name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 00:15:36 pause-161508 crio[2471]: time="2024-08-06 00:15:36.861688192Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6b7ede17-a164-44ab-b45f-31c506080e6e name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 00:15:36 pause-161508 crio[2471]: time="2024-08-06 00:15:36.861931554Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb38fda641e398e7269c4fc98840654d4ef417ccc04c0dbf6c34580362b741dc,PodSandboxId:11fed89ca356a76abf9f5cf4a8cb9b1d34a89a2c434ff78a4f706070f378a78c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722903317683439547,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-55wbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d90e043a-0525-4e59-9712-70116590d766,},Annotations:map[string]string{io.kubernetes.container.hash: acb1bb23,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4086fd17ccf0a3abca003e8a74c3e9407ee2b4f844d50f018f01889b004f2e72,PodSandboxId:777385c422e42d154fb7a8bb5b55b02aecb6d77ebfca355ae637275547f7ae8a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722903313913286196,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 515bfc503159b4bbe954e027b35cf1cb,},Annotations:map[string]string{io.kubernetes.container.hash: 574d5a6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:982476c4266b39f507a2b02b008aa89568d49d4e23c11d16111623b19660630c,PodSandboxId:b9198d20e0c75cff4e61b5ff0ad932276cd4bd88de410bc9dbe4420f7e14b591,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722903313892238688,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e677cb0bf72cff2cfe647e5180a645c6,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8549cef6ca6f2186a15e55ba9b40db7f6b2948b5ae1430b198aaf36324fe4d12,PodSandboxId:9c50be63bb0e17758fb1fc280928e9a5bdd051b8a4babb033e39846cb22d746b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722903313862725842,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e853443f8265426dc355b3c076e12bba,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc6df04b5cb9b90f3374c1e2cd15ec1fb3a0df999fa901662eecfe2bb3d6ee58,PodSandboxId:8c802c9490a1a015c30e657e438b732d323d9ebadf946c75fe8583444defe9d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722903313860213377,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa0ba9a109192f9bf83e28dceb8ed1ab,},Annotations:map[string]string{io.kubernetes.container.hash: bf72a8bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7a29ddb2d7a6b8db6a21aa6442f10a220f961e45a0453bef7e140494e61f546,PodSandboxId:0a9567f716680b7eac2daf2c025fc1a51bb9618cc918b6ec21eedb02307b2a2b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722903297750733768,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9wwqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 111220a5-a088-4652-a1a3-284f2d1b111b,},Annotations:map[string]string{io.kubernetes.container.hash: 227892e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adcecbbd6a938c51103d7edc01cd0855e22c469f90e20bf3e4a76fbd715a4744,PodSandboxId:11fed89ca356a76abf9f5cf4a8cb9b1d34a89a2c434ff78a4f706070f378a78c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722903296535177581,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-55wbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d90e043a-0525-4e59-9712-70116590d766,},Annotations:map[string]string{io.kubernetes.container.hash: acb1bb
23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d8cf53ea71f671cd11c77d76585125000808e1e5e9dbdf057515fae3694c8c2,PodSandboxId:b9198d20e0c75cff4e61b5ff0ad932276cd4bd88de410bc9dbe4420f7e14b591,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722903296536992644,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e677cb0bf72cff2cfe647e5180a645c6,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.co
ntainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c3e3869967dcdea9538e99cfba9fa7cbeab8604b70330171ff36214ad65dc4f,PodSandboxId:777385c422e42d154fb7a8bb5b55b02aecb6d77ebfca355ae637275547f7ae8a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722903296438781418,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 515bfc503159b4bbe954e027b35cf1cb,},Annotations:map[string]string{io.kubernetes.container.hash: 574d5a6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bf2df2d254dca2dd27d3eae24da873f45a9ff1fbdfc0ea1dd1a35201bcd069a,PodSandboxId:9c50be63bb0e17758fb1fc280928e9a5bdd051b8a4babb033e39846cb22d746b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722903296303133201,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e853443f8265426dc355b3c076e12bba,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5f13fe4c6e99948bd3db06aa7e20e2aa8073f836fe73e27f62926299efa70db,PodSandboxId:8c802c9490a1a015c30e657e438b732d323d9ebadf946c75fe8583444defe9d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722903296335835381,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa0ba9a109192f9bf83e28dceb8ed1ab,},Annotations:map[string]string{io.kubernetes.container.hash: bf72a8bf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7bde654f01ecd95054cba7e1831b15349cfc28b44f4f1a6722bec18d022099a,PodSandboxId:cdbab9ce1e914d71878d039e4d5f1059541433a0180f911897309405ae8b389a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722903240464916282,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9wwqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 111220a5-a088-4652-a1a3-284f2d1b111b,},Annotations:map[string]string{io.kubernetes.container.hash: 227892e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6b7ede17-a164-44ab-b45f-31c506080e6e name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 00:15:36 pause-161508 crio[2471]: time="2024-08-06 00:15:36.911777808Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0aec24d7-5351-4186-8a11-f5b0b467b13e name=/runtime.v1.RuntimeService/Version
	Aug 06 00:15:36 pause-161508 crio[2471]: time="2024-08-06 00:15:36.911873512Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0aec24d7-5351-4186-8a11-f5b0b467b13e name=/runtime.v1.RuntimeService/Version
	Aug 06 00:15:36 pause-161508 crio[2471]: time="2024-08-06 00:15:36.913211601Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=88308abc-eb4b-4407-97e7-106e91683c57 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 00:15:36 pause-161508 crio[2471]: time="2024-08-06 00:15:36.914034610Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722903336913910263,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=88308abc-eb4b-4407-97e7-106e91683c57 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 00:15:36 pause-161508 crio[2471]: time="2024-08-06 00:15:36.916835149Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1c1e8eb3-9582-4a72-844b-adbf19f84f68 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 00:15:36 pause-161508 crio[2471]: time="2024-08-06 00:15:36.916913634Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1c1e8eb3-9582-4a72-844b-adbf19f84f68 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 00:15:36 pause-161508 crio[2471]: time="2024-08-06 00:15:36.917278078Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb38fda641e398e7269c4fc98840654d4ef417ccc04c0dbf6c34580362b741dc,PodSandboxId:11fed89ca356a76abf9f5cf4a8cb9b1d34a89a2c434ff78a4f706070f378a78c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722903317683439547,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-55wbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d90e043a-0525-4e59-9712-70116590d766,},Annotations:map[string]string{io.kubernetes.container.hash: acb1bb23,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4086fd17ccf0a3abca003e8a74c3e9407ee2b4f844d50f018f01889b004f2e72,PodSandboxId:777385c422e42d154fb7a8bb5b55b02aecb6d77ebfca355ae637275547f7ae8a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722903313913286196,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 515bfc503159b4bbe954e027b35cf1cb,},Annotations:map[string]string{io.kubernetes.container.hash: 574d5a6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:982476c4266b39f507a2b02b008aa89568d49d4e23c11d16111623b19660630c,PodSandboxId:b9198d20e0c75cff4e61b5ff0ad932276cd4bd88de410bc9dbe4420f7e14b591,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722903313892238688,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e677cb0bf72cff2cfe647e5180a645c6,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8549cef6ca6f2186a15e55ba9b40db7f6b2948b5ae1430b198aaf36324fe4d12,PodSandboxId:9c50be63bb0e17758fb1fc280928e9a5bdd051b8a4babb033e39846cb22d746b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722903313862725842,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e853443f8265426dc355b3c076e12bba,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc6df04b5cb9b90f3374c1e2cd15ec1fb3a0df999fa901662eecfe2bb3d6ee58,PodSandboxId:8c802c9490a1a015c30e657e438b732d323d9ebadf946c75fe8583444defe9d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722903313860213377,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa0ba9a109192f9bf83e28dceb8ed1ab,},Annotations:map[string]string{io.kubernetes.container.hash: bf72a8bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7a29ddb2d7a6b8db6a21aa6442f10a220f961e45a0453bef7e140494e61f546,PodSandboxId:0a9567f716680b7eac2daf2c025fc1a51bb9618cc918b6ec21eedb02307b2a2b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722903297750733768,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9wwqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 111220a5-a088-4652-a1a3-284f2d1b111b,},Annotations:map[string]string{io.kubernetes.container.hash: 227892e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adcecbbd6a938c51103d7edc01cd0855e22c469f90e20bf3e4a76fbd715a4744,PodSandboxId:11fed89ca356a76abf9f5cf4a8cb9b1d34a89a2c434ff78a4f706070f378a78c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722903296535177581,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-55wbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d90e043a-0525-4e59-9712-70116590d766,},Annotations:map[string]string{io.kubernetes.container.hash: acb1bb
23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d8cf53ea71f671cd11c77d76585125000808e1e5e9dbdf057515fae3694c8c2,PodSandboxId:b9198d20e0c75cff4e61b5ff0ad932276cd4bd88de410bc9dbe4420f7e14b591,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722903296536992644,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e677cb0bf72cff2cfe647e5180a645c6,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.co
ntainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c3e3869967dcdea9538e99cfba9fa7cbeab8604b70330171ff36214ad65dc4f,PodSandboxId:777385c422e42d154fb7a8bb5b55b02aecb6d77ebfca355ae637275547f7ae8a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722903296438781418,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 515bfc503159b4bbe954e027b35cf1cb,},Annotations:map[string]string{io.kubernetes.container.hash: 574d5a6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bf2df2d254dca2dd27d3eae24da873f45a9ff1fbdfc0ea1dd1a35201bcd069a,PodSandboxId:9c50be63bb0e17758fb1fc280928e9a5bdd051b8a4babb033e39846cb22d746b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722903296303133201,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e853443f8265426dc355b3c076e12bba,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5f13fe4c6e99948bd3db06aa7e20e2aa8073f836fe73e27f62926299efa70db,PodSandboxId:8c802c9490a1a015c30e657e438b732d323d9ebadf946c75fe8583444defe9d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722903296335835381,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa0ba9a109192f9bf83e28dceb8ed1ab,},Annotations:map[string]string{io.kubernetes.container.hash: bf72a8bf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7bde654f01ecd95054cba7e1831b15349cfc28b44f4f1a6722bec18d022099a,PodSandboxId:cdbab9ce1e914d71878d039e4d5f1059541433a0180f911897309405ae8b389a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722903240464916282,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9wwqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 111220a5-a088-4652-a1a3-284f2d1b111b,},Annotations:map[string]string{io.kubernetes.container.hash: 227892e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1c1e8eb3-9582-4a72-844b-adbf19f84f68 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 00:15:36 pause-161508 crio[2471]: time="2024-08-06 00:15:36.962970381Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1054de6d-6a68-428e-b3f5-b47d8a57b90d name=/runtime.v1.RuntimeService/Version
	Aug 06 00:15:36 pause-161508 crio[2471]: time="2024-08-06 00:15:36.963043482Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1054de6d-6a68-428e-b3f5-b47d8a57b90d name=/runtime.v1.RuntimeService/Version
	Aug 06 00:15:36 pause-161508 crio[2471]: time="2024-08-06 00:15:36.964172090Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9755cee8-62f3-47b6-b0d4-c64f7382a051 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 00:15:36 pause-161508 crio[2471]: time="2024-08-06 00:15:36.964772514Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722903336964724114,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9755cee8-62f3-47b6-b0d4-c64f7382a051 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 00:15:36 pause-161508 crio[2471]: time="2024-08-06 00:15:36.965256103Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9bc30cef-2202-41d7-a805-4b62ce2fd6eb name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 00:15:36 pause-161508 crio[2471]: time="2024-08-06 00:15:36.965327417Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9bc30cef-2202-41d7-a805-4b62ce2fd6eb name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 00:15:36 pause-161508 crio[2471]: time="2024-08-06 00:15:36.965676015Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb38fda641e398e7269c4fc98840654d4ef417ccc04c0dbf6c34580362b741dc,PodSandboxId:11fed89ca356a76abf9f5cf4a8cb9b1d34a89a2c434ff78a4f706070f378a78c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722903317683439547,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-55wbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d90e043a-0525-4e59-9712-70116590d766,},Annotations:map[string]string{io.kubernetes.container.hash: acb1bb23,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4086fd17ccf0a3abca003e8a74c3e9407ee2b4f844d50f018f01889b004f2e72,PodSandboxId:777385c422e42d154fb7a8bb5b55b02aecb6d77ebfca355ae637275547f7ae8a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722903313913286196,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 515bfc503159b4bbe954e027b35cf1cb,},Annotations:map[string]string{io.kubernetes.container.hash: 574d5a6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:982476c4266b39f507a2b02b008aa89568d49d4e23c11d16111623b19660630c,PodSandboxId:b9198d20e0c75cff4e61b5ff0ad932276cd4bd88de410bc9dbe4420f7e14b591,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722903313892238688,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e677cb0bf72cff2cfe647e5180a645c6,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8549cef6ca6f2186a15e55ba9b40db7f6b2948b5ae1430b198aaf36324fe4d12,PodSandboxId:9c50be63bb0e17758fb1fc280928e9a5bdd051b8a4babb033e39846cb22d746b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722903313862725842,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e853443f8265426dc355b3c076e12bba,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc6df04b5cb9b90f3374c1e2cd15ec1fb3a0df999fa901662eecfe2bb3d6ee58,PodSandboxId:8c802c9490a1a015c30e657e438b732d323d9ebadf946c75fe8583444defe9d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722903313860213377,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa0ba9a109192f9bf83e28dceb8ed1ab,},Annotations:map[string]string{io.kubernetes.container.hash: bf72a8bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7a29ddb2d7a6b8db6a21aa6442f10a220f961e45a0453bef7e140494e61f546,PodSandboxId:0a9567f716680b7eac2daf2c025fc1a51bb9618cc918b6ec21eedb02307b2a2b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722903297750733768,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9wwqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 111220a5-a088-4652-a1a3-284f2d1b111b,},Annotations:map[string]string{io.kubernetes.container.hash: 227892e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adcecbbd6a938c51103d7edc01cd0855e22c469f90e20bf3e4a76fbd715a4744,PodSandboxId:11fed89ca356a76abf9f5cf4a8cb9b1d34a89a2c434ff78a4f706070f378a78c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722903296535177581,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-55wbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d90e043a-0525-4e59-9712-70116590d766,},Annotations:map[string]string{io.kubernetes.container.hash: acb1bb
23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d8cf53ea71f671cd11c77d76585125000808e1e5e9dbdf057515fae3694c8c2,PodSandboxId:b9198d20e0c75cff4e61b5ff0ad932276cd4bd88de410bc9dbe4420f7e14b591,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722903296536992644,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e677cb0bf72cff2cfe647e5180a645c6,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.co
ntainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c3e3869967dcdea9538e99cfba9fa7cbeab8604b70330171ff36214ad65dc4f,PodSandboxId:777385c422e42d154fb7a8bb5b55b02aecb6d77ebfca355ae637275547f7ae8a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722903296438781418,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 515bfc503159b4bbe954e027b35cf1cb,},Annotations:map[string]string{io.kubernetes.container.hash: 574d5a6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bf2df2d254dca2dd27d3eae24da873f45a9ff1fbdfc0ea1dd1a35201bcd069a,PodSandboxId:9c50be63bb0e17758fb1fc280928e9a5bdd051b8a4babb033e39846cb22d746b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722903296303133201,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e853443f8265426dc355b3c076e12bba,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5f13fe4c6e99948bd3db06aa7e20e2aa8073f836fe73e27f62926299efa70db,PodSandboxId:8c802c9490a1a015c30e657e438b732d323d9ebadf946c75fe8583444defe9d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722903296335835381,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa0ba9a109192f9bf83e28dceb8ed1ab,},Annotations:map[string]string{io.kubernetes.container.hash: bf72a8bf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7bde654f01ecd95054cba7e1831b15349cfc28b44f4f1a6722bec18d022099a,PodSandboxId:cdbab9ce1e914d71878d039e4d5f1059541433a0180f911897309405ae8b389a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722903240464916282,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9wwqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 111220a5-a088-4652-a1a3-284f2d1b111b,},Annotations:map[string]string{io.kubernetes.container.hash: 227892e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9bc30cef-2202-41d7-a805-4b62ce2fd6eb name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 00:15:37 pause-161508 crio[2471]: time="2024-08-06 00:15:37.011277529Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5b257d0b-6e1a-4916-a309-84979919b252 name=/runtime.v1.RuntimeService/Version
	Aug 06 00:15:37 pause-161508 crio[2471]: time="2024-08-06 00:15:37.011381899Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5b257d0b-6e1a-4916-a309-84979919b252 name=/runtime.v1.RuntimeService/Version
	Aug 06 00:15:37 pause-161508 crio[2471]: time="2024-08-06 00:15:37.013052575Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=69a0afc8-b483-419a-9ee9-da6d7533a787 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 00:15:37 pause-161508 crio[2471]: time="2024-08-06 00:15:37.013629560Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722903337013600406,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=69a0afc8-b483-419a-9ee9-da6d7533a787 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 06 00:15:37 pause-161508 crio[2471]: time="2024-08-06 00:15:37.014231712Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5ed87ed4-b90c-4b0b-8b40-ffabe5756814 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 00:15:37 pause-161508 crio[2471]: time="2024-08-06 00:15:37.014305643Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5ed87ed4-b90c-4b0b-8b40-ffabe5756814 name=/runtime.v1.RuntimeService/ListContainers
	Aug 06 00:15:37 pause-161508 crio[2471]: time="2024-08-06 00:15:37.014697734Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb38fda641e398e7269c4fc98840654d4ef417ccc04c0dbf6c34580362b741dc,PodSandboxId:11fed89ca356a76abf9f5cf4a8cb9b1d34a89a2c434ff78a4f706070f378a78c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722903317683439547,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-55wbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d90e043a-0525-4e59-9712-70116590d766,},Annotations:map[string]string{io.kubernetes.container.hash: acb1bb23,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4086fd17ccf0a3abca003e8a74c3e9407ee2b4f844d50f018f01889b004f2e72,PodSandboxId:777385c422e42d154fb7a8bb5b55b02aecb6d77ebfca355ae637275547f7ae8a,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722903313913286196,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 515bfc503159b4bbe954e027b35cf1cb,},Annotations:map[string]string{io.kubernetes.container.hash: 574d5a6a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:982476c4266b39f507a2b02b008aa89568d49d4e23c11d16111623b19660630c,PodSandboxId:b9198d20e0c75cff4e61b5ff0ad932276cd4bd88de410bc9dbe4420f7e14b591,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722903313892238688,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e677cb0bf72cff2cfe647e5180a645c6,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8549cef6ca6f2186a15e55ba9b40db7f6b2948b5ae1430b198aaf36324fe4d12,PodSandboxId:9c50be63bb0e17758fb1fc280928e9a5bdd051b8a4babb033e39846cb22d746b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722903313862725842,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e853443f8265426dc355b3c076e12bba,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc6df04b5cb9b90f3374c1e2cd15ec1fb3a0df999fa901662eecfe2bb3d6ee58,PodSandboxId:8c802c9490a1a015c30e657e438b732d323d9ebadf946c75fe8583444defe9d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722903313860213377,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa0ba9a109192f9bf83e28dceb8ed1ab,},Annotations:map[string]string{io.kubernetes.container.hash: bf72a8bf,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7a29ddb2d7a6b8db6a21aa6442f10a220f961e45a0453bef7e140494e61f546,PodSandboxId:0a9567f716680b7eac2daf2c025fc1a51bb9618cc918b6ec21eedb02307b2a2b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722903297750733768,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9wwqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 111220a5-a088-4652-a1a3-284f2d1b111b,},Annotations:map[string]string{io.kubernetes.container.hash: 227892e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:adcecbbd6a938c51103d7edc01cd0855e22c469f90e20bf3e4a76fbd715a4744,PodSandboxId:11fed89ca356a76abf9f5cf4a8cb9b1d34a89a2c434ff78a4f706070f378a78c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722903296535177581,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-55wbx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d90e043a-0525-4e59-9712-70116590d766,},Annotations:map[string]string{io.kubernetes.container.hash: acb1bb
23,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d8cf53ea71f671cd11c77d76585125000808e1e5e9dbdf057515fae3694c8c2,PodSandboxId:b9198d20e0c75cff4e61b5ff0ad932276cd4bd88de410bc9dbe4420f7e14b591,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722903296536992644,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e677cb0bf72cff2cfe647e5180a645c6,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.co
ntainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c3e3869967dcdea9538e99cfba9fa7cbeab8604b70330171ff36214ad65dc4f,PodSandboxId:777385c422e42d154fb7a8bb5b55b02aecb6d77ebfca355ae637275547f7ae8a,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722903296438781418,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 515bfc503159b4bbe954e027b35cf1cb,},Annotations:map[string]string{io.kubernetes.container.hash: 574d5a6a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1bf2df2d254dca2dd27d3eae24da873f45a9ff1fbdfc0ea1dd1a35201bcd069a,PodSandboxId:9c50be63bb0e17758fb1fc280928e9a5bdd051b8a4babb033e39846cb22d746b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722903296303133201,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e853443f8265426dc355b3c076e12bba,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5f13fe4c6e99948bd3db06aa7e20e2aa8073f836fe73e27f62926299efa70db,PodSandboxId:8c802c9490a1a015c30e657e438b732d323d9ebadf946c75fe8583444defe9d3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722903296335835381,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-161508,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa0ba9a109192f9bf83e28dceb8ed1ab,},Annotations:map[string]string{io.kubernetes.container.hash: bf72a8bf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7bde654f01ecd95054cba7e1831b15349cfc28b44f4f1a6722bec18d022099a,PodSandboxId:cdbab9ce1e914d71878d039e4d5f1059541433a0180f911897309405ae8b389a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722903240464916282,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-9wwqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 111220a5-a088-4652-a1a3-284f2d1b111b,},Annotations:map[string]string{io.kubernetes.container.hash: 227892e4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5ed87ed4-b90c-4b0b-8b40-ffabe5756814 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	bb38fda641e39       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   19 seconds ago       Running             kube-proxy                2                   11fed89ca356a       kube-proxy-55wbx
	4086fd17ccf0a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   23 seconds ago       Running             etcd                      2                   777385c422e42       etcd-pause-161508
	982476c4266b3       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   23 seconds ago       Running             kube-scheduler            2                   b9198d20e0c75       kube-scheduler-pause-161508
	8549cef6ca6f2       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   23 seconds ago       Running             kube-controller-manager   2                   9c50be63bb0e1       kube-controller-manager-pause-161508
	bc6df04b5cb9b       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   23 seconds ago       Running             kube-apiserver            2                   8c802c9490a1a       kube-apiserver-pause-161508
	d7a29ddb2d7a6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   39 seconds ago       Running             coredns                   1                   0a9567f716680       coredns-7db6d8ff4d-9wwqk
	7d8cf53ea71f6       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   40 seconds ago       Exited              kube-scheduler            1                   b9198d20e0c75       kube-scheduler-pause-161508
	adcecbbd6a938       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   40 seconds ago       Exited              kube-proxy                1                   11fed89ca356a       kube-proxy-55wbx
	6c3e3869967dc       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   40 seconds ago       Exited              etcd                      1                   777385c422e42       etcd-pause-161508
	b5f13fe4c6e99       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   40 seconds ago       Exited              kube-apiserver            1                   8c802c9490a1a       kube-apiserver-pause-161508
	1bf2df2d254dc       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   40 seconds ago       Exited              kube-controller-manager   1                   9c50be63bb0e1       kube-controller-manager-pause-161508
	e7bde654f01ec       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   cdbab9ce1e914       coredns-7db6d8ff4d-9wwqk
	
	
	==> coredns [d7a29ddb2d7a6b8db6a21aa6442f10a220f961e45a0453bef7e140494e61f546] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:33467 - 42429 "HINFO IN 3203146776900514644.2698836118367998909. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01026401s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: unknown (get services)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: unknown (get namespaces)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: unknown (get endpointslices.discovery.k8s.io)
	
	
	==> coredns [e7bde654f01ecd95054cba7e1831b15349cfc28b44f4f1a6722bec18d022099a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:41319 - 25821 "HINFO IN 2717171734076828573.5468262155880170471. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014901881s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1233790655]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Aug-2024 00:14:00.688) (total time: 30001ms):
	Trace[1233790655]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (00:14:30.689)
	Trace[1233790655]: [30.001861477s] [30.001861477s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[97412699]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Aug-2024 00:14:00.690) (total time: 30000ms):
	Trace[97412699]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (00:14:30.690)
	Trace[97412699]: [30.000839656s] [30.000839656s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[382621574]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (06-Aug-2024 00:14:00.689) (total time: 30001ms):
	Trace[382621574]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (00:14:30.689)
	Trace[382621574]: [30.001726412s] [30.001726412s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-161508
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-161508
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=pause-161508
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_06T00_13_45_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 00:13:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-161508
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 00:15:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Aug 2024 00:15:17 +0000   Tue, 06 Aug 2024 00:13:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Aug 2024 00:15:17 +0000   Tue, 06 Aug 2024 00:13:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Aug 2024 00:15:17 +0000   Tue, 06 Aug 2024 00:13:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Aug 2024 00:15:17 +0000   Tue, 06 Aug 2024 00:13:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.118
	  Hostname:    pause-161508
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 5ef8fca4ccaf4cb494720ebb268ed59b
	  System UUID:                5ef8fca4-ccaf-4cb4-9472-0ebb268ed59b
	  Boot ID:                    82a52a91-2eab-4313-92db-b2c395de80bd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-9wwqk                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     98s
	  kube-system                 etcd-pause-161508                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         113s
	  kube-system                 kube-apiserver-pause-161508             250m (12%)    0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-pause-161508    200m (10%)    0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-55wbx                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-scheduler-pause-161508             100m (5%)     0 (0%)      0 (0%)           0 (0%)         113s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 96s                  kube-proxy       
	  Normal  Starting                 19s                  kube-proxy       
	  Normal  Starting                 36s                  kube-proxy       
	  Normal  Starting                 119s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  119s (x8 over 119s)  kubelet          Node pause-161508 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    119s (x8 over 119s)  kubelet          Node pause-161508 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     119s (x7 over 119s)  kubelet          Node pause-161508 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  119s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    113s                 kubelet          Node pause-161508 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  113s                 kubelet          Node pause-161508 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     113s                 kubelet          Node pause-161508 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  113s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 113s                 kubelet          Starting kubelet.
	  Normal  NodeReady                112s                 kubelet          Node pause-161508 status is now: NodeReady
	  Normal  RegisteredNode           99s                  node-controller  Node pause-161508 event: Registered Node pause-161508 in Controller
	  Normal  Starting                 24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)    kubelet          Node pause-161508 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)    kubelet          Node pause-161508 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)    kubelet          Node pause-161508 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7s                   node-controller  Node pause-161508 event: Registered Node pause-161508 in Controller
	
	
	==> dmesg <==
	[  +8.233226] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.061945] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055087] systemd-fstab-generator[616]: Ignoring "noauto" option for root device
	[  +0.174348] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.179878] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.309043] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +4.774092] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +0.067731] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.074764] systemd-fstab-generator[951]: Ignoring "noauto" option for root device
	[  +1.091143] kauditd_printk_skb: 57 callbacks suppressed
	[  +4.990784] systemd-fstab-generator[1289]: Ignoring "noauto" option for root device
	[  +0.098966] kauditd_printk_skb: 30 callbacks suppressed
	[ +14.741716] systemd-fstab-generator[1519]: Ignoring "noauto" option for root device
	[  +0.154545] kauditd_printk_skb: 21 callbacks suppressed
	[Aug 6 00:14] kauditd_printk_skb: 84 callbacks suppressed
	[ +42.628608] systemd-fstab-generator[2389]: Ignoring "noauto" option for root device
	[  +0.156399] systemd-fstab-generator[2401]: Ignoring "noauto" option for root device
	[  +0.236133] systemd-fstab-generator[2415]: Ignoring "noauto" option for root device
	[  +0.182052] systemd-fstab-generator[2427]: Ignoring "noauto" option for root device
	[  +0.317694] systemd-fstab-generator[2455]: Ignoring "noauto" option for root device
	[  +2.061400] systemd-fstab-generator[2766]: Ignoring "noauto" option for root device
	[Aug 6 00:15] kauditd_printk_skb: 195 callbacks suppressed
	[ +10.990703] systemd-fstab-generator[3379]: Ignoring "noauto" option for root device
	[ +17.423311] systemd-fstab-generator[3773]: Ignoring "noauto" option for root device
	[  +0.067034] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [4086fd17ccf0a3abca003e8a74c3e9407ee2b4f844d50f018f01889b004f2e72] <==
	{"level":"info","ts":"2024-08-06T00:15:14.33504Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-06T00:15:14.335592Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-06T00:15:14.335908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86c29206b457f123 switched to configuration voters=(9710484304057332003)"}
	{"level":"info","ts":"2024-08-06T00:15:14.359437Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"56e4fbef5627b38f","local-member-id":"86c29206b457f123","added-peer-id":"86c29206b457f123","added-peer-peer-urls":["https://192.168.39.118:2380"]}
	{"level":"info","ts":"2024-08-06T00:15:14.359674Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"56e4fbef5627b38f","local-member-id":"86c29206b457f123","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T00:15:14.359729Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T00:15:14.36122Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-06T00:15:14.370815Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"86c29206b457f123","initial-advertise-peer-urls":["https://192.168.39.118:2380"],"listen-peer-urls":["https://192.168.39.118:2380"],"advertise-client-urls":["https://192.168.39.118:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.118:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-06T00:15:14.370952Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-06T00:15:14.364797Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.118:2380"}
	{"level":"info","ts":"2024-08-06T00:15:14.371072Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.118:2380"}
	{"level":"info","ts":"2024-08-06T00:15:15.896345Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86c29206b457f123 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-06T00:15:15.896417Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86c29206b457f123 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-06T00:15:15.896453Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86c29206b457f123 received MsgPreVoteResp from 86c29206b457f123 at term 3"}
	{"level":"info","ts":"2024-08-06T00:15:15.896469Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86c29206b457f123 became candidate at term 4"}
	{"level":"info","ts":"2024-08-06T00:15:15.896477Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86c29206b457f123 received MsgVoteResp from 86c29206b457f123 at term 4"}
	{"level":"info","ts":"2024-08-06T00:15:15.896488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86c29206b457f123 became leader at term 4"}
	{"level":"info","ts":"2024-08-06T00:15:15.896497Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 86c29206b457f123 elected leader 86c29206b457f123 at term 4"}
	{"level":"info","ts":"2024-08-06T00:15:15.903042Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"86c29206b457f123","local-member-attributes":"{Name:pause-161508 ClientURLs:[https://192.168.39.118:2379]}","request-path":"/0/members/86c29206b457f123/attributes","cluster-id":"56e4fbef5627b38f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-06T00:15:15.903108Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T00:15:15.903234Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T00:15:15.903697Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-06T00:15:15.903754Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-06T00:15:15.905385Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.118:2379"}
	{"level":"info","ts":"2024-08-06T00:15:15.905669Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [6c3e3869967dcdea9538e99cfba9fa7cbeab8604b70330171ff36214ad65dc4f] <==
	{"level":"info","ts":"2024-08-06T00:14:58.237044Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.118:2380"}
	{"level":"info","ts":"2024-08-06T00:14:59.734339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86c29206b457f123 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-06T00:14:59.734401Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86c29206b457f123 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-06T00:14:59.734451Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86c29206b457f123 received MsgPreVoteResp from 86c29206b457f123 at term 2"}
	{"level":"info","ts":"2024-08-06T00:14:59.73447Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86c29206b457f123 became candidate at term 3"}
	{"level":"info","ts":"2024-08-06T00:14:59.734478Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86c29206b457f123 received MsgVoteResp from 86c29206b457f123 at term 3"}
	{"level":"info","ts":"2024-08-06T00:14:59.734488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"86c29206b457f123 became leader at term 3"}
	{"level":"info","ts":"2024-08-06T00:14:59.734498Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 86c29206b457f123 elected leader 86c29206b457f123 at term 3"}
	{"level":"info","ts":"2024-08-06T00:14:59.740513Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T00:14:59.740468Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"86c29206b457f123","local-member-attributes":"{Name:pause-161508 ClientURLs:[https://192.168.39.118:2379]}","request-path":"/0/members/86c29206b457f123/attributes","cluster-id":"56e4fbef5627b38f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-06T00:14:59.741621Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T00:14:59.74188Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-06T00:14:59.741896Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-06T00:14:59.743751Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.118:2379"}
	{"level":"info","ts":"2024-08-06T00:14:59.744151Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-06T00:15:01.441715Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-06T00:15:01.441787Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"pause-161508","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.118:2380"],"advertise-client-urls":["https://192.168.39.118:2379"]}
	{"level":"warn","ts":"2024-08-06T00:15:01.441867Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.118:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-06T00:15:01.441907Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.118:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-06T00:15:01.443226Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-06T00:15:01.443325Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-06T00:15:01.468325Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"86c29206b457f123","current-leader-member-id":"86c29206b457f123"}
	{"level":"info","ts":"2024-08-06T00:15:01.476372Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.118:2380"}
	{"level":"info","ts":"2024-08-06T00:15:01.476616Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.118:2380"}
	{"level":"info","ts":"2024-08-06T00:15:01.47663Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"pause-161508","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.118:2380"],"advertise-client-urls":["https://192.168.39.118:2379"]}
	
	
	==> kernel <==
	 00:15:37 up 2 min,  0 users,  load average: 0.78, 0.32, 0.12
	Linux pause-161508 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [b5f13fe4c6e99948bd3db06aa7e20e2aa8073f836fe73e27f62926299efa70db] <==
	W0806 00:15:10.715829       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:10.734882       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:10.744080       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:10.745607       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:10.756926       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:10.758407       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:10.761067       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:10.770177       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:10.861145       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:10.868174       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:10.886010       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:10.922363       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:10.957873       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:10.980173       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:11.017777       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:11.052290       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:11.079236       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:11.122637       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:11.159418       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:11.237890       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:11.283014       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:11.291202       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:11.326767       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:11.354907       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 00:15:11.356406       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [bc6df04b5cb9b90f3374c1e2cd15ec1fb3a0df999fa901662eecfe2bb3d6ee58] <==
	I0806 00:15:17.166812       1 controller.go:87] Starting OpenAPI V3 controller
	I0806 00:15:17.166890       1 naming_controller.go:291] Starting NamingConditionController
	I0806 00:15:17.166925       1 establishing_controller.go:76] Starting EstablishingController
	I0806 00:15:17.166961       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0806 00:15:17.167022       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0806 00:15:17.204063       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0806 00:15:17.204128       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0806 00:15:17.208893       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0806 00:15:17.208979       1 policy_source.go:224] refreshing policies
	I0806 00:15:17.221917       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0806 00:15:17.222035       1 shared_informer.go:320] Caches are synced for configmaps
	I0806 00:15:17.226941       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0806 00:15:17.226978       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0806 00:15:17.227236       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0806 00:15:17.234429       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0806 00:15:17.246328       1 cache.go:39] Caches are synced for autoregister controller
	I0806 00:15:17.308496       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0806 00:15:18.107653       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0806 00:15:18.751435       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0806 00:15:18.764887       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0806 00:15:18.812579       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0806 00:15:18.846488       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0806 00:15:18.853746       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0806 00:15:30.560150       1 controller.go:615] quota admission added evaluator for: endpoints
	I0806 00:15:30.572888       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [1bf2df2d254dca2dd27d3eae24da873f45a9ff1fbdfc0ea1dd1a35201bcd069a] <==
	I0806 00:14:58.507409       1 serving.go:380] Generated self-signed cert in-memory
	I0806 00:14:58.769450       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0806 00:14:58.769491       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 00:14:58.771374       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0806 00:14:58.771670       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0806 00:14:58.771878       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0806 00:14:58.772106       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	
	==> kube-controller-manager [8549cef6ca6f2186a15e55ba9b40db7f6b2948b5ae1430b198aaf36324fe4d12] <==
	I0806 00:15:30.440283       1 shared_informer.go:320] Caches are synced for ephemeral
	I0806 00:15:30.464363       1 shared_informer.go:320] Caches are synced for expand
	I0806 00:15:30.464791       1 shared_informer.go:320] Caches are synced for TTL
	I0806 00:15:30.488334       1 shared_informer.go:320] Caches are synced for node
	I0806 00:15:30.488694       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0806 00:15:30.488904       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0806 00:15:30.488941       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0806 00:15:30.488955       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0806 00:15:30.489136       1 shared_informer.go:320] Caches are synced for persistent volume
	I0806 00:15:30.489375       1 shared_informer.go:320] Caches are synced for endpoint
	I0806 00:15:30.491103       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0806 00:15:30.491240       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0806 00:15:30.505835       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0806 00:15:30.506057       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.389µs"
	I0806 00:15:30.519644       1 shared_informer.go:320] Caches are synced for disruption
	I0806 00:15:30.528692       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0806 00:15:30.553692       1 shared_informer.go:320] Caches are synced for resource quota
	I0806 00:15:30.567171       1 shared_informer.go:320] Caches are synced for resource quota
	I0806 00:15:30.570428       1 shared_informer.go:320] Caches are synced for crt configmap
	I0806 00:15:30.618994       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0806 00:15:30.638027       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0806 00:15:30.660492       1 shared_informer.go:320] Caches are synced for attach detach
	I0806 00:15:31.097223       1 shared_informer.go:320] Caches are synced for garbage collector
	I0806 00:15:31.098610       1 shared_informer.go:320] Caches are synced for garbage collector
	I0806 00:15:31.098679       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [adcecbbd6a938c51103d7edc01cd0855e22c469f90e20bf3e4a76fbd715a4744] <==
	I0806 00:14:58.540143       1 server_linux.go:69] "Using iptables proxy"
	I0806 00:15:01.162283       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.118"]
	I0806 00:15:01.223251       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0806 00:15:01.223333       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0806 00:15:01.223404       1 server_linux.go:165] "Using iptables Proxier"
	I0806 00:15:01.226436       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0806 00:15:01.226740       1 server.go:872] "Version info" version="v1.30.3"
	I0806 00:15:01.226780       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 00:15:01.228263       1 config.go:192] "Starting service config controller"
	I0806 00:15:01.228331       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0806 00:15:01.228366       1 config.go:101] "Starting endpoint slice config controller"
	I0806 00:15:01.228389       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0806 00:15:01.229295       1 config.go:319] "Starting node config controller"
	I0806 00:15:01.229326       1 shared_informer.go:313] Waiting for caches to sync for node config
	
	
	==> kube-proxy [bb38fda641e398e7269c4fc98840654d4ef417ccc04c0dbf6c34580362b741dc] <==
	I0806 00:15:17.797479       1 server_linux.go:69] "Using iptables proxy"
	I0806 00:15:17.806358       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.118"]
	I0806 00:15:17.841357       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0806 00:15:17.841443       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0806 00:15:17.841461       1 server_linux.go:165] "Using iptables Proxier"
	I0806 00:15:17.844034       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0806 00:15:17.844295       1 server.go:872] "Version info" version="v1.30.3"
	I0806 00:15:17.844322       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 00:15:17.845359       1 config.go:192] "Starting service config controller"
	I0806 00:15:17.845432       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0806 00:15:17.845458       1 config.go:101] "Starting endpoint slice config controller"
	I0806 00:15:17.845461       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0806 00:15:17.846027       1 config.go:319] "Starting node config controller"
	I0806 00:15:17.846058       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0806 00:15:17.945622       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0806 00:15:17.945710       1 shared_informer.go:320] Caches are synced for service config
	I0806 00:15:17.946371       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7d8cf53ea71f671cd11c77d76585125000808e1e5e9dbdf057515fae3694c8c2] <==
	I0806 00:14:58.557483       1 serving.go:380] Generated self-signed cert in-memory
	W0806 00:15:01.111136       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0806 00:15:01.113702       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0806 00:15:01.113837       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0806 00:15:01.113933       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0806 00:15:01.174034       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0806 00:15:01.174179       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 00:15:01.179469       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0806 00:15:01.180394       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0806 00:15:01.180446       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0806 00:15:01.180465       1 shared_informer.go:316] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0806 00:15:01.180473       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0806 00:15:01.180592       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0806 00:15:01.180622       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0806 00:15:01.180731       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0806 00:15:01.181208       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	E0806 00:15:01.181390       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [982476c4266b39f507a2b02b008aa89568d49d4e23c11d16111623b19660630c] <==
	I0806 00:15:15.108125       1 serving.go:380] Generated self-signed cert in-memory
	W0806 00:15:17.175012       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0806 00:15:17.175372       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0806 00:15:17.175432       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0806 00:15:17.175457       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0806 00:15:17.227135       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0806 00:15:17.227194       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 00:15:17.230858       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0806 00:15:17.231051       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0806 00:15:17.231099       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0806 00:15:17.231131       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0806 00:15:17.331870       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 06 00:15:13 pause-161508 kubelet[3386]: I0806 00:15:13.606268    3386 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aa0ba9a109192f9bf83e28dceb8ed1ab-usr-share-ca-certificates\") pod \"kube-apiserver-pause-161508\" (UID: \"aa0ba9a109192f9bf83e28dceb8ed1ab\") " pod="kube-system/kube-apiserver-pause-161508"
	Aug 06 00:15:13 pause-161508 kubelet[3386]: I0806 00:15:13.606290    3386 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e853443f8265426dc355b3c076e12bba-ca-certs\") pod \"kube-controller-manager-pause-161508\" (UID: \"e853443f8265426dc355b3c076e12bba\") " pod="kube-system/kube-controller-manager-pause-161508"
	Aug 06 00:15:13 pause-161508 kubelet[3386]: I0806 00:15:13.606317    3386 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e853443f8265426dc355b3c076e12bba-k8s-certs\") pod \"kube-controller-manager-pause-161508\" (UID: \"e853443f8265426dc355b3c076e12bba\") " pod="kube-system/kube-controller-manager-pause-161508"
	Aug 06 00:15:13 pause-161508 kubelet[3386]: I0806 00:15:13.606371    3386 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e853443f8265426dc355b3c076e12bba-kubeconfig\") pod \"kube-controller-manager-pause-161508\" (UID: \"e853443f8265426dc355b3c076e12bba\") " pod="kube-system/kube-controller-manager-pause-161508"
	Aug 06 00:15:13 pause-161508 kubelet[3386]: I0806 00:15:13.700306    3386 kubelet_node_status.go:73] "Attempting to register node" node="pause-161508"
	Aug 06 00:15:13 pause-161508 kubelet[3386]: E0806 00:15:13.701242    3386 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.118:8443: connect: connection refused" node="pause-161508"
	Aug 06 00:15:13 pause-161508 kubelet[3386]: I0806 00:15:13.839147    3386 scope.go:117] "RemoveContainer" containerID="6c3e3869967dcdea9538e99cfba9fa7cbeab8604b70330171ff36214ad65dc4f"
	Aug 06 00:15:13 pause-161508 kubelet[3386]: I0806 00:15:13.840383    3386 scope.go:117] "RemoveContainer" containerID="1bf2df2d254dca2dd27d3eae24da873f45a9ff1fbdfc0ea1dd1a35201bcd069a"
	Aug 06 00:15:13 pause-161508 kubelet[3386]: I0806 00:15:13.841284    3386 scope.go:117] "RemoveContainer" containerID="b5f13fe4c6e99948bd3db06aa7e20e2aa8073f836fe73e27f62926299efa70db"
	Aug 06 00:15:13 pause-161508 kubelet[3386]: I0806 00:15:13.843805    3386 scope.go:117] "RemoveContainer" containerID="7d8cf53ea71f671cd11c77d76585125000808e1e5e9dbdf057515fae3694c8c2"
	Aug 06 00:15:14 pause-161508 kubelet[3386]: E0806 00:15:14.005777    3386 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-161508?timeout=10s\": dial tcp 192.168.39.118:8443: connect: connection refused" interval="800ms"
	Aug 06 00:15:14 pause-161508 kubelet[3386]: I0806 00:15:14.106201    3386 kubelet_node_status.go:73] "Attempting to register node" node="pause-161508"
	Aug 06 00:15:14 pause-161508 kubelet[3386]: E0806 00:15:14.107826    3386 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.118:8443: connect: connection refused" node="pause-161508"
	Aug 06 00:15:14 pause-161508 kubelet[3386]: I0806 00:15:14.909385    3386 kubelet_node_status.go:73] "Attempting to register node" node="pause-161508"
	Aug 06 00:15:17 pause-161508 kubelet[3386]: I0806 00:15:17.273053    3386 kubelet_node_status.go:112] "Node was previously registered" node="pause-161508"
	Aug 06 00:15:17 pause-161508 kubelet[3386]: I0806 00:15:17.273266    3386 kubelet_node_status.go:76] "Successfully registered node" node="pause-161508"
	Aug 06 00:15:17 pause-161508 kubelet[3386]: I0806 00:15:17.275268    3386 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 06 00:15:17 pause-161508 kubelet[3386]: I0806 00:15:17.276647    3386 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 06 00:15:17 pause-161508 kubelet[3386]: I0806 00:15:17.368804    3386 apiserver.go:52] "Watching apiserver"
	Aug 06 00:15:17 pause-161508 kubelet[3386]: I0806 00:15:17.371503    3386 topology_manager.go:215] "Topology Admit Handler" podUID="111220a5-a088-4652-a1a3-284f2d1b111b" podNamespace="kube-system" podName="coredns-7db6d8ff4d-9wwqk"
	Aug 06 00:15:17 pause-161508 kubelet[3386]: I0806 00:15:17.372608    3386 topology_manager.go:215] "Topology Admit Handler" podUID="d90e043a-0525-4e59-9712-70116590d766" podNamespace="kube-system" podName="kube-proxy-55wbx"
	Aug 06 00:15:17 pause-161508 kubelet[3386]: I0806 00:15:17.396598    3386 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Aug 06 00:15:17 pause-161508 kubelet[3386]: I0806 00:15:17.456587    3386 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d90e043a-0525-4e59-9712-70116590d766-xtables-lock\") pod \"kube-proxy-55wbx\" (UID: \"d90e043a-0525-4e59-9712-70116590d766\") " pod="kube-system/kube-proxy-55wbx"
	Aug 06 00:15:17 pause-161508 kubelet[3386]: I0806 00:15:17.457008    3386 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d90e043a-0525-4e59-9712-70116590d766-lib-modules\") pod \"kube-proxy-55wbx\" (UID: \"d90e043a-0525-4e59-9712-70116590d766\") " pod="kube-system/kube-proxy-55wbx"
	Aug 06 00:15:17 pause-161508 kubelet[3386]: I0806 00:15:17.673804    3386 scope.go:117] "RemoveContainer" containerID="adcecbbd6a938c51103d7edc01cd0855e22c469f90e20bf3e4a76fbd715a4744"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0806 00:15:36.561082   62791 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19373-9606/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
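
The "bufio.Scanner: token too long" failure in the stderr block above is a stock Go error: bufio.Scanner rejects any token larger than its buffer, which defaults to bufio.MaxScanTokenSize (64 KiB), so a single over-long line in lastStart.txt aborts the scan. A minimal sketch of reading such a file with an enlarged scanner buffer, assuming an illustrative path and buffer sizes rather than minikube's actual logs.go wiring:

	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
	)

	func main() {
		// Illustrative path; the report above reads .minikube/logs/lastStart.txt.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default cap is bufio.MaxScanTokenSize (64 KiB); raising it avoids
		// "bufio.Scanner: token too long" on very long lines.
		sc.Buffer(make([]byte, 0, 1024*1024), 10*1024*1024)

		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			log.Fatalf("scan failed: %v", err)
		}
	}
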
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-161508 -n pause-161508
helpers_test.go:261: (dbg) Run:  kubectl --context pause-161508 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (56.57s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (7200.053s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-200266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
panic: test timed out after 2h0m0s
running tests:
	TestNetworkPlugins (37m51s)
	TestNetworkPlugins/group/auto (48s)
	TestNetworkPlugins/group/auto/Start (48s)
	TestNetworkPlugins/group/kindnet (13s)
	TestNetworkPlugins/group/kindnet/Start (13s)
	TestStartStop (37m51s)
	TestStartStop/group/default-k8s-diff-port (27m1s)
	TestStartStop/group/default-k8s-diff-port/serial (27m1s)
	TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6m27s)
	TestStartStop/group/embed-certs (29m19s)
	TestStartStop/group/embed-certs/serial (29m19s)
	TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6m8s)

                                                
                                                
goroutine 8158 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d
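
The panic above is Go's test-binary watchdog: testing.(*M).startAlarm arms a timer for the binary's -timeout (2h0m0s here) and panics from goroutine 8158 when it fires, aborting the whole suite while the tests listed above are still in flight; kindnet/Start happened to be the newest of them and gets charged the full 7200s. A minimal sketch of giving one long external step its own, shorter deadline so a hang surfaces before the global alarm; the command line, profile name, and 15-minute budget are illustrative, not the suite's actual wiring:

	package main

	import (
		"context"
		"log"
		"os/exec"
		"time"
	)

	func main() {
		// A per-step deadline, independent of the test binary's global -timeout.
		ctx, cancel := context.WithTimeout(context.Background(), 15*time.Minute)
		defer cancel()

		// Illustrative invocation of the command the Start subtest runs.
		cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64",
			"start", "-p", "kindnet-200266", "--cni=kindnet", "--driver=kvm2")
		out, err := cmd.CombinedOutput()
		if err != nil {
			// If the deadline expired, the child was killed and err reflects it.
			log.Fatalf("minikube start failed: %v\n%s", err, out)
		}
		log.Printf("minikube start finished:\n%s", out)
	}
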

                                                
                                                
goroutine 1 [chan receive, 30 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0006571e0, 0xc00095fbb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc0007fa450, {0x49d6100, 0x2b, 0x2b}, {0x26b6a9e?, 0xc00065bb00?, 0x4a92c80?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc00086fae0)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc00086fae0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:133 +0x195

                                                
                                                
goroutine 10 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc00056af00)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 6471 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x36be9c0, 0xc000787900}, {0x36b2140, 0xc00157dac0}, 0x1, 0x0, 0xc00006fb40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36bea30?, 0xc0003c6000?}, 0x3b9aca00, 0xc00006fd38?, 0x1, 0xc00006fb40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x36bea30, 0xc0003c6000}, 0xc0013711e0, {0xc001922000, 0x12}, {0x26820d2, 0x14}, {0x2699cad, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAddonAfterStop({0x36bea30, 0xc0003c6000}, 0xc0013711e0, {0xc001922000, 0x12}, {0x2669468?, 0xc001a70f60?}, {0x551133?, 0x4a170f?}, {0xc00093bb00, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:287 +0x13b
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0013711e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0013711e0, 0xc001d9a000)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3160
	/usr/local/go/src/testing/testing.go:1742 +0x390
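
The AddonExistsAfterStop subtests in this dump are parked inside PodWait, which polls the cluster through k8s.io/apimachinery's wait.PollUntilContextTimeout (the 0x7dba821800 argument is the timeout in nanoseconds, i.e. 540s or 9 minutes). A minimal sketch of that polling pattern, assuming an illustrative kubeconfig-based client and the dashboard namespace/selector the addon check waits on:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Illustrative client setup; the test builds its client from the profile's kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Poll every second, starting immediately, for up to 9 minutes (540s, as in the trace).
		err = wait.PollUntilContextTimeout(context.Background(), time.Second, 9*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
					LabelSelector: "k8s-app=kubernetes-dashboard", // illustrative selector
				})
				if err != nil {
					return false, nil // treat API hiccups as "not ready yet" and keep polling
				}
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						return true, nil
					}
				}
				return false, nil
			})
		fmt.Println("wait result:", err) // nil once a Running pod appears, non-nil on timeout
	}
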

                                                
                                                
goroutine 3341 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc0019246d0, 0x3)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148ba0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00198baa0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001924700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0007d0050, {0x369aac0, 0xc0016160f0}, 0x1, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0007d0050, 0x3b9aca00, 0x0, 0x1, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3421
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 14 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1141 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 13
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1137 +0x171

                                                
                                                
goroutine 7965 [syscall]:
syscall.Syscall6(0xf7, 0x1, 0x125d4, 0xc001419bd0, 0x1000004, 0x0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:91 +0x39
os.(*Process).blockUntilWaitable(0xc0017ca660)
	/usr/local/go/src/os/wait_waitid.go:32 +0x76
os.(*Process).wait(0xc0017ca660)
	/usr/local/go/src/os/exec_unix.go:22 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0014e2300)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc0014e2300)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc0009d69c0, 0xc0014e2300)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc0009d69c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:112 +0x52
testing.tRunner(0xc0009d69c0, 0xc00187c2d0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2596
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3235 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bebf0, 0xc0000602a0}, 0xc0015d3750, 0xc0000a9f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bebf0, 0xc0000602a0}, 0xc0?, 0xc0015d3750, 0xc0015d3798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bebf0?, 0xc0000602a0?}, 0x6db57a?, 0x7b8e18?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0015d37d0?, 0x592e44?, 0xc000060fc0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3210
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 197 [chan receive, 115 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000a133c0, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 181
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 1495 [chan send, 100 minutes]:
os/exec.(*Cmd).watchCtx(0xc0016f3080, 0xc001a0dc80)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1494
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 978 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc001a127d0, 0x29)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148ba0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0008fd320)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001a12800)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0008b07f0, {0x369aac0, 0xc000925260}, 0x1, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0008b07f0, 0x3b9aca00, 0x0, 0x1, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 963
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 234 [select, 115 minutes]:
net/http.(*persistConn).readLoop(0xc00099fd40)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 219
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

                                                
                                                
goroutine 217 [select, 115 minutes]:
net/http.(*persistConn).readLoop(0xc0014387e0)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 208
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

                                                
                                                
goroutine 218 [select, 115 minutes]:
net/http.(*persistConn).writeLoop(0xc0014387e0)
	/usr/local/go/src/net/http/transport.go:2458 +0xf0
created by net/http.(*Transport).dialConn in goroutine 208
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

                                                
                                                
goroutine 189 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc000a13350, 0x2c)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148ba0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000900cc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000a133c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0005a49f0, {0x369aac0, 0xc0009069c0}, 0x1, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0005a49f0, 0x3b9aca00, 0x0, 0x1, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 197
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 963 [chan receive, 101 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001a12800, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 894
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 190 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bebf0, 0xc0000602a0}, 0xc000110f50, 0xc000985f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bebf0, 0xc0000602a0}, 0xa0?, 0xc000110f50, 0xc000110f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bebf0?, 0xc0000602a0?}, 0xc000110fd0?, 0x807e7b?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000110fd0?, 0x592e44?, 0xc000110fb8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 197
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 196 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000900de0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 181
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 191 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 190
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 235 [select, 115 minutes]:
net/http.(*persistConn).writeLoop(0xc00099fd40)
	/usr/local/go/src/net/http/transport.go:2458 +0xf0
created by net/http.(*Transport).dialConn in goroutine 219
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

                                                
                                                
goroutine 805 [IO wait, 105 minutes]:
internal/poll.runtime_pollWait(0x7fc1b0206d80, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0x11?, 0x3fe?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc00063c680)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc00063c680)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc000408e60)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc000408e60)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0007800f0, {0x36b1a80, 0xc000408e60})
	/usr/local/go/src/net/http/server.go:3260 +0x33e
net/http.(*Server).ListenAndServe(0xc0007800f0)
	/usr/local/go/src/net/http/server.go:3189 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xc0009d7a00?, 0xc0009d7a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 802
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

                                                
                                                
goroutine 1575 [select, 100 minutes]:
net/http.(*persistConn).readLoop(0xc001c93200)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 1573
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

                                                
                                                
goroutine 3342 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bebf0, 0xc0000602a0}, 0xc000111750, 0xc000111798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bebf0, 0xc0000602a0}, 0x11?, 0xc000111750, 0xc000111798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bebf0?, 0xc0000602a0?}, 0xc001345a00?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0001117d0?, 0x592e44?, 0xc00084b480?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3421
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2481 [chan receive, 39 minutes]:
testing.(*T).Run(0xc001410340, {0x265c089?, 0x55127c?}, 0xc001a665e8)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc001410340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc001410340, 0x313f358)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2559 [chan receive, 27 minutes]:
testing.(*T).Run(0xc001344680, {0x265d634?, 0x0?}, 0xc00090a400)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001344680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc001344680, 0xc000a13500)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2556
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 7968 [select]:
os/exec.(*Cmd).watchCtx(0xc0014e2300, 0xc000599200)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 7965
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 3160 [chan receive, 6 minutes]:
testing.(*T).Run(0xc001370820, {0x2682136?, 0x60400000004?}, 0xc001d9a000)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001370820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc001370820, 0xc00084a180)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2594
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2737 [chan receive, 39 minutes]:
testing.(*testContext).waitParallel(0xc000a14820)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0009d7040)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0009d7040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0009d7040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0009d7040, 0xc00090ab80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2595
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3421 [chan receive, 17 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001924700, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3419
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3343 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3342
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2594 [chan receive, 30 minutes]:
testing.(*T).Run(0xc001344b60, {0x265d634?, 0x0?}, 0xc00084a180)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001344b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc001344b60, 0xc000a13640)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2556
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2596 [chan receive]:
testing.(*T).Run(0xc001345040, {0x265c08e?, 0x3693440?}, 0xc00187c2d0)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001345040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:111 +0x5de
testing.tRunner(0xc001345040, 0xc00084a280)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2595
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3236 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3235
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3234 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc001701a50, 0x5)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148ba0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001464420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001701a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00147c210, {0x369aac0, 0xc0020e4750}, 0x1, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00147c210, 0x3b9aca00, 0x0, 0x1, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3210
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3314 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bebf0, 0xc0000602a0}, 0xc001406f50, 0xc001406f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bebf0, 0xc0000602a0}, 0xe0?, 0xc001406f50, 0xc001406f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bebf0?, 0xc0000602a0?}, 0x6db57a?, 0x7b8e18?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001406fd0?, 0x592e44?, 0xc0019a21e0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3305
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 1576 [select, 100 minutes]:
net/http.(*persistConn).writeLoop(0xc001c93200)
	/usr/local/go/src/net/http/transport.go:2458 +0xf0
created by net/http.(*Transport).dialConn in goroutine 1573
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

                                                
                                                
goroutine 3304 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001465860)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3300
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2595 [chan receive, 39 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc001344d00, 0xc001a665e8)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2481
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 6337 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x36bea30, 0xc000469490}, {0x36b2140, 0xc000705140}, 0x1, 0x0, 0xc00151fb40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/loop.go:66 +0x1e6
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36bea30?, 0xc0005a00e0?}, 0x3b9aca00, 0xc00006fd38?, 0x1, 0xc00006fb40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x36bea30, 0xc0005a00e0}, 0xc001371040, {0xc001538000, 0x1c}, {0x26820d2, 0x14}, {0x2699cad, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAddonAfterStop({0x36bea30, 0xc0005a00e0}, 0xc001371040, {0xc001538000, 0x1c}, {0x2684fcc?, 0xc001aee760?}, {0x551133?, 0x4a170f?}, {0xc0005d6800, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:287 +0x13b
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc001371040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc001371040, 0xc001d9a080)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3270
	/usr/local/go/src/testing/testing.go:1742 +0x390
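
Goroutine 6337 above is the test's own wait loop: PodWait drives apimachinery's PollUntilContextTimeout (the trailing argument 0x7dba821800, read as nanoseconds, decodes to 540s, i.e. roughly a nine-minute budget). A minimal sketch of that polling shape, with a stand-in condition and illustrative interval/timeout rather than the real pod-readiness check:

	// Sketch of the PollUntilContextTimeout usage behind PodWait in goroutine 6337.
	// The condition below is a placeholder; the real helper lists pods for a label
	// selector and checks readiness on every tick.
	package main

	import (
		"context"
		"fmt"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
	)

	func main() {
		ctx := context.Background()

		attempts := 0
		condition := func(ctx context.Context) (bool, error) {
			attempts++ // stand-in check: report success on the third poll
			return attempts >= 3, nil
		}

		// Poll every second, starting immediately, for at most nine minutes.
		err := wait.PollUntilContextTimeout(ctx, 1*time.Second, 9*time.Minute, true, condition)
		fmt.Println("poll finished after", attempts, "attempts, err =", err)
	}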

                                                
                                                
goroutine 2736 [chan receive, 39 minutes]:
testing.(*testContext).waitParallel(0xc000a14820)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0009d6d00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0009d6d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0009d6d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0009d6d00, 0xc00090ab00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2595
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2579 [chan receive, 39 minutes]:
testing.(*T).Run(0xc0014109c0, {0x265c089?, 0x551133?}, 0x313f578)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc0014109c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc0014109c0, 0x313f3a0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3315 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3314
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 1371 [chan send, 100 minutes]:
os/exec.(*Cmd).watchCtx(0xc001800600, 0xc001802540)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1370
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 7966 [IO wait]:
internal/poll.runtime_pollWait(0x7fc1b0206c88, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001e125a0?, 0xc0017ae279?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001e125a0, {0xc0017ae279, 0x587, 0x587})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0008de248, {0xc0017ae279?, 0x7ffdb902c27a?, 0x21d?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00187c390, {0x3699560, 0xc0017de068})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36996a0, 0xc00187c390}, {0x3699560, 0xc0017de068}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0008de248?, {0x36996a0, 0xc00187c390})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc0008de248, {0x36996a0, 0xc00187c390})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x36996a0, 0xc00187c390}, {0x36995c0, 0xc0008de248}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc00187c2d0?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 7965
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

                                                
                                                
goroutine 962 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0008fd440)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 894
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 1459 [chan send, 100 minutes]:
os/exec.(*Cmd).watchCtx(0xc001801b00, 0xc001803aa0)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 881
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 979 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36bebf0, 0xc0000602a0}, 0xc000096750, 0xc000986f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36bebf0, 0xc0000602a0}, 0x40?, 0xc000096750, 0xc000096798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36bebf0?, 0xc0000602a0?}, 0xc0009d71e0?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0000967d0?, 0x592e44?, 0xc00092c840?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 963
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 980 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 979
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2703 [chan receive, 39 minutes]:
testing.(*testContext).waitParallel(0xc000a14820)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000239a00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000239a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000239a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000239a00, 0xc001d94680)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2595
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 8115 [IO wait]:
internal/poll.runtime_pollWait(0x7fc1b0206a98, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001b35da0?, 0xc00179f4df?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001b35da0, {0xc00179f4df, 0xb21, 0xb21})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc001526470, {0xc00179f4df?, 0x0?, 0x3e15?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0008dbda0, {0x3699560, 0xc000803190})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36996a0, 0xc0008dbda0}, {0x3699560, 0xc000803190}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc001526470?, {0x36996a0, 0xc0008dbda0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc001526470, {0x36996a0, 0xc0008dbda0})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x36996a0, 0xc0008dbda0}, {0x36995c0, 0xc001526470}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc0007d7130?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 8081
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

                                                
                                                
goroutine 3305 [chan receive, 27 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0013a4d40, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3300
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2704 [chan receive]:
testing.(*T).Run(0xc000239ba0, {0x265c08e?, 0x3693440?}, 0xc0008dbcb0)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000239ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:111 +0x5de
testing.tRunner(0xc000239ba0, 0xc001d94700)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2595
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 8116 [select]:
os/exec.(*Cmd).watchCtx(0xc00141f080, 0xc000060fc0)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 8081
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 2701 [chan receive, 39 minutes]:
testing.(*testContext).waitParallel(0xc000a14820)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0002396c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0002396c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0002396c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0002396c0, 0xc001d94580)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2595
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2556 [chan receive]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc001344000, 0x313f578)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2579
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3249 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0013a4d10, 0x5)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148ba0?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001465740)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0013a4d40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0014723d0, {0x369aac0, 0xc001a42150}, 0x1, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0014723d0, 0x3b9aca00, 0x0, 0x1, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3305
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 8081 [syscall]:
syscall.Syscall6(0xf7, 0x1, 0x1292a, 0xc001a1dbd0, 0x1000004, 0x0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:91 +0x39
os.(*Process).blockUntilWaitable(0xc001777e00)
	/usr/local/go/src/os/wait_waitid.go:32 +0x76
os.(*Process).wait(0xc001777e00)
	/usr/local/go/src/os/exec_unix.go:22 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc00141f080)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc00141f080)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc000239520, 0xc00141f080)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc000239520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:112 +0x52
testing.tRunner(0xc000239520, 0xc0008dbcb0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2704
	/usr/local/go/src/testing/testing.go:1742 +0x390
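
Goroutines 8081 and 8114-8116 above belong to one in-flight command: the test helper sits in exec.Cmd.Run waiting on the child process (the waitid syscall frame) while two goroutines copy its stdout/stderr into buffers and a third watches the context. A minimal sketch of that os/exec shape; "echo hello" and the 30-second timeout are placeholders, not the actual minikube invocation.

	// Sketch of the os/exec pattern behind integration.Run in goroutine 8081.
	package main

	import (
		"bytes"
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()

		var stdout, stderr bytes.Buffer
		cmd := exec.CommandContext(ctx, "echo", "hello")
		cmd.Stdout = &stdout // non-*os.File writers make exec spawn a copy goroutine per stream
		cmd.Stderr = &stderr

		// Run = Start + Wait: this goroutine parks until the child exits or the
		// context is cancelled, as in the syscall frame above.
		if err := cmd.Run(); err != nil {
			fmt.Println("command failed:", err, stderr.String())
			return
		}
		fmt.Print(stdout.String())
	}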

                                                
                                                
goroutine 8114 [IO wait]:
internal/poll.runtime_pollWait(0x7fc1b0206b90, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001b35ce0?, 0xc00084da26?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001b35ce0, {0xc00084da26, 0x5da, 0x5da})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc001526458, {0xc00084da26?, 0x7ffdb902c27a?, 0x226?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0008dbd70, {0x3699560, 0xc0008de960})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36996a0, 0xc0008dbd70}, {0x3699560, 0xc0008de960}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc001526458?, {0x36996a0, 0xc0008dbd70})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc001526458, {0x36996a0, 0xc0008dbd70})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x36996a0, 0xc0008dbd70}, {0x36995c0, 0xc001526458}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc0008dbcb0?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 8081
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

                                                
                                                
goroutine 2702 [chan receive, 39 minutes]:
testing.(*testContext).waitParallel(0xc000a14820)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000239860)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000239860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000239860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc000239860, 0xc001d94600)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2595
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3270 [chan receive, 6 minutes]:
testing.(*T).Run(0xc0009d71e0, {0x2682136?, 0x60400000004?}, 0xc001d9a080)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0009d71e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0009d71e0, 0xc00090a400)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2559
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3645 [IO wait]:
internal/poll.runtime_pollWait(0x7fc1b0206e78, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00152bb80?, 0xc001492000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00152bb80, {0xc001492000, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
net.(*netFD).Read(0xc00152bb80, {0xc001492000?, 0x7fc1a0f44708?, 0xc0019f0fa8?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc001526008, {0xc001492000?, 0xc000987938?, 0x41469b?})
	/usr/local/go/src/net/net.go:185 +0x45
crypto/tls.(*atLeastReader).Read(0xc0019f0fa8, {0xc001492000?, 0x0?, 0xc0019f0fa8?})
	/usr/local/go/src/crypto/tls/conn.go:806 +0x3b
bytes.(*Buffer).ReadFrom(0xc000005430, {0x369b260, 0xc0019f0fa8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc000005188, {0x369a640, 0xc001526008}, 0xc000987980?)
	/usr/local/go/src/crypto/tls/conn.go:828 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc000005188, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:626 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:588
crypto/tls.(*Conn).Read(0xc000005188, {0xc0014fd000, 0x1000, 0xc0018e6fc0?})
	/usr/local/go/src/crypto/tls/conn.go:1370 +0x156
bufio.(*Reader).Read(0xc000900480, {0xc0018dc120, 0x9, 0x4991c30?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3699740, 0xc000900480}, {0xc0018dc120, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc0018dc120, 0x9, 0x987dc0?}, {0x3699740?, 0xc000900480?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0018dc0e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc000987fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc000002780)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:2250 +0x8b
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 3644
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:865 +0xcfb

                                                
                                                
goroutine 3420 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00198bbc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3419
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3210 [chan receive, 28 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001701a80, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3182
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3209 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001464540)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3182
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3663 [IO wait]:
internal/poll.runtime_pollWait(0x7fc1b0207068, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001610d80?, 0xc0006a7800?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001610d80, {0xc0006a7800, 0x800, 0x800})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
net.(*netFD).Read(0xc001610d80, {0xc0006a7800?, 0xc0000daa00?, 0x2?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc0008de138, {0xc0006a7800?, 0xc0006a7805?, 0x70?})
	/usr/local/go/src/net/net.go:185 +0x45
crypto/tls.(*atLeastReader).Read(0xc0017c4c60, {0xc0006a7800?, 0x0?, 0xc0017c4c60?})
	/usr/local/go/src/crypto/tls/conn.go:806 +0x3b
bytes.(*Buffer).ReadFrom(0xc0002a77b0, {0x369b260, 0xc0017c4c60})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc0002a7508, {0x7fc1a05569d8, 0xc00139a000}, 0xc000992980?)
	/usr/local/go/src/crypto/tls/conn.go:828 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc0002a7508, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:626 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:588
crypto/tls.(*Conn).Read(0xc0002a7508, {0xc0016fa000, 0x1000, 0xc001a93180?})
	/usr/local/go/src/crypto/tls/conn.go:1370 +0x156
bufio.(*Reader).Read(0xc00198b980, {0xc0002b24a0, 0x9, 0x4991c30?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x3699740, 0xc00198b980}, {0xc0002b24a0, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc0002b24a0, 0x9, 0x992dc0?}, {0x3699740?, 0xc00198b980?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0002b2460)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc000992fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc0014e2180)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:2250 +0x8b
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 3662
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:865 +0xcfb

                                                
                                                
goroutine 7967 [IO wait]:
internal/poll.runtime_pollWait(0x7fc1b0207350, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001e12660?, 0xc002432400?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001e12660, {0xc002432400, 0x7c00, 0x7c00})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0008de278, {0xc002432400?, 0x0?, 0xfe01?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00187c3c0, {0x3699560, 0xc0015260d8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36996a0, 0xc00187c3c0}, {0x3699560, 0xc0015260d8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0008de278?, {0x36996a0, 0xc00187c3c0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc0008de278, {0x36996a0, 0xc00187c3c0})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x36996a0, 0xc00187c3c0}, {0x36995c0, 0xc0008de278}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc0014fbad0?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 7965
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

                                                
                                    

Test pass (179/230)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 55.7
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.30.3/json-events 14.37
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.06
18 TestDownloadOnly/v1.30.3/DeleteAll 0.13
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.31.0-rc.0/json-events 53.88
22 TestDownloadOnly/v1.31.0-rc.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-rc.0/LogsDuration 0.06
27 TestDownloadOnly/v1.31.0-rc.0/DeleteAll 0.13
28 TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds 0.12
30 TestBinaryMirror 0.56
31 TestOffline 100.69
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
36 TestAddons/Setup 207.49
40 TestAddons/serial/GCPAuth/Namespaces 0.15
42 TestAddons/parallel/Registry 17.5
44 TestAddons/parallel/InspektorGadget 11.16
46 TestAddons/parallel/HelmTiller 15.2
48 TestAddons/parallel/CSI 69.81
49 TestAddons/parallel/Headlamp 19.84
50 TestAddons/parallel/CloudSpanner 6.54
51 TestAddons/parallel/LocalPath 56.11
52 TestAddons/parallel/NvidiaDevicePlugin 5.54
53 TestAddons/parallel/Yakd 10.76
55 TestCertOptions 48.55
56 TestCertExpiration 411.16
58 TestForceSystemdFlag 61.36
59 TestForceSystemdEnv 86.33
61 TestKVMDriverInstallOrUpdate 4.69
65 TestErrorSpam/setup 45.92
66 TestErrorSpam/start 0.33
67 TestErrorSpam/status 0.73
68 TestErrorSpam/pause 1.6
69 TestErrorSpam/unpause 1.64
70 TestErrorSpam/stop 5.43
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 98.25
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 42.41
77 TestFunctional/serial/KubeContext 0.04
78 TestFunctional/serial/KubectlGetPods 0.08
81 TestFunctional/serial/CacheCmd/cache/add_remote 3.03
82 TestFunctional/serial/CacheCmd/cache/add_local 2.22
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
84 TestFunctional/serial/CacheCmd/cache/list 0.05
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.63
87 TestFunctional/serial/CacheCmd/cache/delete 0.09
88 TestFunctional/serial/MinikubeKubectlCmd 0.1
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
90 TestFunctional/serial/ExtraConfig 60.05
91 TestFunctional/serial/ComponentHealth 0.06
92 TestFunctional/serial/LogsCmd 1.47
93 TestFunctional/serial/LogsFileCmd 1.58
94 TestFunctional/serial/InvalidService 3.56
96 TestFunctional/parallel/ConfigCmd 0.34
97 TestFunctional/parallel/DashboardCmd 12.34
98 TestFunctional/parallel/DryRun 0.26
99 TestFunctional/parallel/InternationalLanguage 0.14
100 TestFunctional/parallel/StatusCmd 0.79
104 TestFunctional/parallel/ServiceCmdConnect 60.45
105 TestFunctional/parallel/AddonsCmd 0.13
108 TestFunctional/parallel/SSHCmd 0.4
109 TestFunctional/parallel/CpCmd 1.28
110 TestFunctional/parallel/MySQL 46.33
111 TestFunctional/parallel/FileSync 0.19
112 TestFunctional/parallel/CertSync 1.27
116 TestFunctional/parallel/NodeLabels 0.06
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.38
120 TestFunctional/parallel/License 0.61
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
124 TestFunctional/parallel/ServiceCmd/DeployApp 61.21
134 TestFunctional/parallel/Version/short 0.04
135 TestFunctional/parallel/Version/components 0.82
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.2
140 TestFunctional/parallel/ImageCommands/ImageBuild 3.56
141 TestFunctional/parallel/ImageCommands/Setup 1.92
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.26
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.89
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.74
145 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.5
146 TestFunctional/parallel/ImageCommands/ImageRemove 0.45
147 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.74
148 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.57
149 TestFunctional/parallel/ProfileCmd/profile_not_create 0.27
150 TestFunctional/parallel/ProfileCmd/profile_list 0.25
151 TestFunctional/parallel/ProfileCmd/profile_json_output 0.24
152 TestFunctional/parallel/MountCmd/any-port 8.4
153 TestFunctional/parallel/ServiceCmd/List 0.42
154 TestFunctional/parallel/ServiceCmd/JSONOutput 0.43
155 TestFunctional/parallel/ServiceCmd/HTTPS 0.31
156 TestFunctional/parallel/ServiceCmd/Format 0.3
157 TestFunctional/parallel/ServiceCmd/URL 0.33
158 TestFunctional/parallel/MountCmd/specific-port 1.75
159 TestFunctional/parallel/MountCmd/VerifyCleanup 1.5
160 TestFunctional/delete_echo-server_images 0.04
161 TestFunctional/delete_my-image_image 0.02
162 TestFunctional/delete_minikube_cached_images 0.02
166 TestMultiControlPlane/serial/StartCluster 215.14
167 TestMultiControlPlane/serial/DeployApp 6.74
168 TestMultiControlPlane/serial/PingHostFromPods 1.21
169 TestMultiControlPlane/serial/AddWorkerNode 57.92
170 TestMultiControlPlane/serial/NodeLabels 0.07
171 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.53
172 TestMultiControlPlane/serial/CopyFile 12.81
174 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.47
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.4
178 TestMultiControlPlane/serial/DeleteSecondaryNode 17.39
179 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.37
181 TestMultiControlPlane/serial/RestartCluster 289.21
182 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.37
183 TestMultiControlPlane/serial/AddSecondaryNode 78.31
184 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.54
188 TestJSONOutput/start/Command 97.54
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.74
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.63
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 7.38
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.19
216 TestMainNoArgs 0.04
217 TestMinikubeProfile 88.2
220 TestMountStart/serial/StartWithMountFirst 27.22
221 TestMountStart/serial/VerifyMountFirst 0.36
222 TestMountStart/serial/StartWithMountSecond 29.88
223 TestMountStart/serial/VerifyMountSecond 0.38
224 TestMountStart/serial/DeleteFirst 0.71
225 TestMountStart/serial/VerifyMountPostDelete 0.37
226 TestMountStart/serial/Stop 1.28
227 TestMountStart/serial/RestartStopped 22.4
228 TestMountStart/serial/VerifyMountPostStop 0.37
231 TestMultiNode/serial/FreshStart2Nodes 126.64
232 TestMultiNode/serial/DeployApp2Nodes 5.73
233 TestMultiNode/serial/PingHostFrom2Pods 0.79
234 TestMultiNode/serial/AddNode 49.18
235 TestMultiNode/serial/MultiNodeLabels 0.06
236 TestMultiNode/serial/ProfileList 0.22
237 TestMultiNode/serial/CopyFile 7.01
238 TestMultiNode/serial/StopNode 2.35
239 TestMultiNode/serial/StartAfterStop 38.71
241 TestMultiNode/serial/DeleteNode 2.26
243 TestMultiNode/serial/RestartMultiNode 629.1
244 TestMultiNode/serial/ValidateNameConflict 47.69
251 TestScheduledStopUnix 115.28
255 TestRunningBinaryUpgrade 259.88
267 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
271 TestNoKubernetes/serial/StartWithK8s 75.12
280 TestNoKubernetes/serial/StartWithStopK8s 41.9
281 TestNoKubernetes/serial/Start 74.42
283 TestPause/serial/Start 128.57
284 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
285 TestNoKubernetes/serial/ProfileList 1.74
286 TestNoKubernetes/serial/Stop 1.55
287 TestNoKubernetes/serial/StartNoArgs 47.55
288 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
290 TestStoppedBinaryUpgrade/Setup 2.66
291 TestStoppedBinaryUpgrade/Upgrade 123.1
294 TestStoppedBinaryUpgrade/MinikubeLogs 0.9
TestDownloadOnly/v1.20.0/json-events (55.7s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-754215 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-754215 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (55.703187606s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (55.70s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-754215
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-754215: exit status 85 (55.551929ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-754215 | jenkins | v1.33.1 | 05 Aug 24 22:47 UTC |          |
	|         | -p download-only-754215        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 22:47:42
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 22:47:42.655185   16805 out.go:291] Setting OutFile to fd 1 ...
	I0805 22:47:42.655399   16805 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 22:47:42.655407   16805 out.go:304] Setting ErrFile to fd 2...
	I0805 22:47:42.655412   16805 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 22:47:42.655588   16805 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-9606/.minikube/bin
	W0805 22:47:42.655706   16805 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19373-9606/.minikube/config/config.json: open /home/jenkins/minikube-integration/19373-9606/.minikube/config/config.json: no such file or directory
	I0805 22:47:42.656308   16805 out.go:298] Setting JSON to true
	I0805 22:47:42.657224   16805 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1809,"bootTime":1722896254,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0805 22:47:42.657279   16805 start.go:139] virtualization: kvm guest
	I0805 22:47:42.659675   16805 out.go:97] [download-only-754215] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0805 22:47:42.659769   16805 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19373-9606/.minikube/cache/preloaded-tarball: no such file or directory
	I0805 22:47:42.659845   16805 notify.go:220] Checking for updates...
	I0805 22:47:42.661374   16805 out.go:169] MINIKUBE_LOCATION=19373
	I0805 22:47:42.662973   16805 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 22:47:42.664446   16805 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19373-9606/kubeconfig
	I0805 22:47:42.666009   16805 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19373-9606/.minikube
	I0805 22:47:42.667498   16805 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0805 22:47:42.670090   16805 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0805 22:47:42.670342   16805 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 22:47:42.772022   16805 out.go:97] Using the kvm2 driver based on user configuration
	I0805 22:47:42.772052   16805 start.go:297] selected driver: kvm2
	I0805 22:47:42.772057   16805 start.go:901] validating driver "kvm2" against <nil>
	I0805 22:47:42.772386   16805 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 22:47:42.772497   16805 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19373-9606/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0805 22:47:42.787234   16805 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0805 22:47:42.787286   16805 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 22:47:42.787788   16805 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0805 22:47:42.787935   16805 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0805 22:47:42.787959   16805 cni.go:84] Creating CNI manager for ""
	I0805 22:47:42.787967   16805 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 22:47:42.787975   16805 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 22:47:42.788027   16805 start.go:340] cluster config:
	{Name:download-only-754215 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-754215 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 22:47:42.788218   16805 iso.go:125] acquiring lock: {Name:mk54a637ed625e04bb2b6adf973b61c976cd6d35 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 22:47:42.790191   16805 out.go:97] Downloading VM boot image ...
	I0805 22:47:42.790230   16805 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19373-9606/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0805 22:47:54.165287   16805 out.go:97] Starting "download-only-754215" primary control-plane node in "download-only-754215" cluster
	I0805 22:47:54.165312   16805 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0805 22:47:54.280134   16805 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0805 22:47:54.280181   16805 cache.go:56] Caching tarball of preloaded images
	I0805 22:47:54.280365   16805 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0805 22:47:54.282587   16805 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0805 22:47:54.282606   16805 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0805 22:47:54.392996   16805 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19373-9606/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0805 22:48:08.285893   16805 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0805 22:48:08.286662   16805 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19373-9606/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0805 22:48:09.194843   16805 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0805 22:48:09.195223   16805 profile.go:143] Saving config to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/download-only-754215/config.json ...
	I0805 22:48:09.195255   16805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/download-only-754215/config.json: {Name:mka2062ef6462c3ea335e3e856992c5d33587503 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:48:09.195429   16805 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0805 22:48:09.195637   16805 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19373-9606/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-754215 host does not exist
	  To start a cluster, run: "minikube start -p download-only-754215"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-754215
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.30.3/json-events (14.37s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-769780 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-769780 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (14.369259849s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (14.37s)

                                                
                                    
TestDownloadOnly/v1.30.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-769780
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-769780: exit status 85 (57.670696ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-754215 | jenkins | v1.33.1 | 05 Aug 24 22:47 UTC |                     |
	|         | -p download-only-754215        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 05 Aug 24 22:48 UTC | 05 Aug 24 22:48 UTC |
	| delete  | -p download-only-754215        | download-only-754215 | jenkins | v1.33.1 | 05 Aug 24 22:48 UTC | 05 Aug 24 22:48 UTC |
	| start   | -o=json --download-only        | download-only-769780 | jenkins | v1.33.1 | 05 Aug 24 22:48 UTC |                     |
	|         | -p download-only-769780        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 22:48:38
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 22:48:38.671764   17193 out.go:291] Setting OutFile to fd 1 ...
	I0805 22:48:38.671884   17193 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 22:48:38.671893   17193 out.go:304] Setting ErrFile to fd 2...
	I0805 22:48:38.671897   17193 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 22:48:38.672080   17193 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-9606/.minikube/bin
	I0805 22:48:38.672631   17193 out.go:298] Setting JSON to true
	I0805 22:48:38.673429   17193 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1865,"bootTime":1722896254,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0805 22:48:38.673489   17193 start.go:139] virtualization: kvm guest
	I0805 22:48:38.675647   17193 out.go:97] [download-only-769780] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0805 22:48:38.675739   17193 notify.go:220] Checking for updates...
	I0805 22:48:38.677150   17193 out.go:169] MINIKUBE_LOCATION=19373
	I0805 22:48:38.678593   17193 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 22:48:38.680196   17193 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19373-9606/kubeconfig
	I0805 22:48:38.681711   17193 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19373-9606/.minikube
	I0805 22:48:38.683261   17193 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0805 22:48:38.686128   17193 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0805 22:48:38.686413   17193 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 22:48:38.717759   17193 out.go:97] Using the kvm2 driver based on user configuration
	I0805 22:48:38.717801   17193 start.go:297] selected driver: kvm2
	I0805 22:48:38.717810   17193 start.go:901] validating driver "kvm2" against <nil>
	I0805 22:48:38.718197   17193 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 22:48:38.718286   17193 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19373-9606/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0805 22:48:38.732708   17193 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0805 22:48:38.732764   17193 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 22:48:38.733411   17193 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0805 22:48:38.733600   17193 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0805 22:48:38.733665   17193 cni.go:84] Creating CNI manager for ""
	I0805 22:48:38.733682   17193 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 22:48:38.733696   17193 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 22:48:38.733769   17193 start.go:340] cluster config:
	{Name:download-only-769780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-769780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 22:48:38.733886   17193 iso.go:125] acquiring lock: {Name:mk54a637ed625e04bb2b6adf973b61c976cd6d35 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 22:48:38.735524   17193 out.go:97] Starting "download-only-769780" primary control-plane node in "download-only-769780" cluster
	I0805 22:48:38.735548   17193 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 22:48:38.842159   17193 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0805 22:48:38.842191   17193 cache.go:56] Caching tarball of preloaded images
	I0805 22:48:38.842336   17193 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0805 22:48:38.844226   17193 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0805 22:48:38.844250   17193 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	I0805 22:48:38.956250   17193 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:15191286f02471d9b3ea0b587fcafc39 -> /home/jenkins/minikube-integration/19373-9606/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-769780 host does not exist
	  To start a cluster, run: "minikube start -p download-only-769780"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.06s)
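The passing run above boils down to two commands that can be repeated by hand; a minimal sketch based on the logged invocation, with download-demo as a placeholder profile name:

    # download the preload/binaries only; no VM is created
    minikube start -o=json --download-only -p download-demo --force --alsologtostderr \
        --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2
    # with no host ever created, "minikube logs" exits with status 85, which is what the test asserts
    minikube logs -p download-demo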

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-769780
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.0-rc.0/json-events (53.88s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-068196 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-068196 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (53.880919265s)
--- PASS: TestDownloadOnly/v1.31.0-rc.0/json-events (53.88s)

                                                
                                    
TestDownloadOnly/v1.31.0-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-rc.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-068196
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-068196: exit status 85 (56.155874ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-754215 | jenkins | v1.33.1 | 05 Aug 24 22:47 UTC |                     |
	|         | -p download-only-754215           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 05 Aug 24 22:48 UTC | 05 Aug 24 22:48 UTC |
	| delete  | -p download-only-754215           | download-only-754215 | jenkins | v1.33.1 | 05 Aug 24 22:48 UTC | 05 Aug 24 22:48 UTC |
	| start   | -o=json --download-only           | download-only-769780 | jenkins | v1.33.1 | 05 Aug 24 22:48 UTC |                     |
	|         | -p download-only-769780           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 05 Aug 24 22:48 UTC | 05 Aug 24 22:48 UTC |
	| delete  | -p download-only-769780           | download-only-769780 | jenkins | v1.33.1 | 05 Aug 24 22:48 UTC | 05 Aug 24 22:48 UTC |
	| start   | -o=json --download-only           | download-only-068196 | jenkins | v1.33.1 | 05 Aug 24 22:48 UTC |                     |
	|         | -p download-only-068196           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 22:48:53
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 22:48:53.357434   17414 out.go:291] Setting OutFile to fd 1 ...
	I0805 22:48:53.357683   17414 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 22:48:53.357691   17414 out.go:304] Setting ErrFile to fd 2...
	I0805 22:48:53.357696   17414 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 22:48:53.357879   17414 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-9606/.minikube/bin
	I0805 22:48:53.358425   17414 out.go:298] Setting JSON to true
	I0805 22:48:53.359291   17414 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1879,"bootTime":1722896254,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0805 22:48:53.359349   17414 start.go:139] virtualization: kvm guest
	I0805 22:48:53.361435   17414 out.go:97] [download-only-068196] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0805 22:48:53.361530   17414 notify.go:220] Checking for updates...
	I0805 22:48:53.362890   17414 out.go:169] MINIKUBE_LOCATION=19373
	I0805 22:48:53.364359   17414 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 22:48:53.365740   17414 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19373-9606/kubeconfig
	I0805 22:48:53.367157   17414 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19373-9606/.minikube
	I0805 22:48:53.368523   17414 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0805 22:48:53.370845   17414 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0805 22:48:53.371033   17414 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 22:48:53.401264   17414 out.go:97] Using the kvm2 driver based on user configuration
	I0805 22:48:53.401283   17414 start.go:297] selected driver: kvm2
	I0805 22:48:53.401288   17414 start.go:901] validating driver "kvm2" against <nil>
	I0805 22:48:53.401583   17414 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 22:48:53.401655   17414 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19373-9606/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0805 22:48:53.415791   17414 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0805 22:48:53.415839   17414 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 22:48:53.416304   17414 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0805 22:48:53.416444   17414 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0805 22:48:53.416498   17414 cni.go:84] Creating CNI manager for ""
	I0805 22:48:53.416509   17414 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0805 22:48:53.416517   17414 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 22:48:53.416564   17414 start.go:340] cluster config:
	{Name:download-only-068196 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:download-only-068196 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 22:48:53.416653   17414 iso.go:125] acquiring lock: {Name:mk54a637ed625e04bb2b6adf973b61c976cd6d35 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 22:48:53.418311   17414 out.go:97] Starting "download-only-068196" primary control-plane node in "download-only-068196" cluster
	I0805 22:48:53.418327   17414 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0805 22:48:53.529154   17414 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0805 22:48:53.529204   17414 cache.go:56] Caching tarball of preloaded images
	I0805 22:48:53.529365   17414 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0805 22:48:53.531095   17414 out.go:97] Downloading Kubernetes v1.31.0-rc.0 preload ...
	I0805 22:48:53.531107   17414 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I0805 22:48:53.639894   17414 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:89b2d75682ccec9e5b50b57ad7b65741 -> /home/jenkins/minikube-integration/19373-9606/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I0805 22:49:06.754720   17414 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I0805 22:49:06.754832   17414 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19373-9606/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I0805 22:49:07.501740   17414 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on crio
	I0805 22:49:07.502052   17414 profile.go:143] Saving config to /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/download-only-068196/config.json ...
	I0805 22:49:07.502078   17414 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/download-only-068196/config.json: {Name:mk584eada87a8c4d386a36842ac62c799ba085e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 22:49:07.502220   17414 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime crio
	I0805 22:49:07.502358   17414 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19373-9606/.minikube/cache/linux/amd64/v1.31.0-rc.0/kubectl
	
	
	* The control-plane node download-only-068196 host does not exist
	  To start a cluster, run: "minikube start -p download-only-068196"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-068196
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestBinaryMirror (0.56s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-208535 --alsologtostderr --binary-mirror http://127.0.0.1:41223 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-208535" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-208535
--- PASS: TestBinaryMirror (0.56s)
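As a rough by-hand reproduction of what this test exercises (assuming a local HTTP server on the given port is already serving the Kubernetes binaries; the profile name is a placeholder):

    minikube start --download-only -p binary-mirror-demo --alsologtostderr \
        --binary-mirror http://127.0.0.1:41223 --driver=kvm2 --container-runtime=crio
    minikube delete -p binary-mirror-demo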

                                                
                                    
TestOffline (100.69s)

=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-820703 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-820703 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m39.682454026s)
helpers_test.go:175: Cleaning up "offline-crio-820703" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-820703
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-820703: (1.010640863s)
--- PASS: TestOffline (100.69s)
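Stripped of the harness (which additionally simulates the offline network conditions), the start/delete pair looks like this; offline-demo is a placeholder profile and the run assumes the images and preloads are already cached locally:

    minikube start -p offline-demo --alsologtostderr -v=1 --memory=2048 --wait=true \
        --driver=kvm2 --container-runtime=crio
    minikube delete -p offline-demo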

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-435364
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-435364: exit status 85 (48.291193ms)

                                                
                                                
-- stdout --
	* Profile "addons-435364" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-435364"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-435364
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-435364: exit status 85 (45.695259ms)

                                                
                                                
-- stdout --
	* Profile "addons-435364" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-435364"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)
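Both PreSetup checks can be reproduced directly; against a profile that does not exist, the addon commands exit with status 85 and print the hint shown above (no-such-profile is a placeholder):

    minikube addons enable dashboard -p no-such-profile    # exit status 85
    minikube addons disable dashboard -p no-such-profile   # exit status 85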

                                                
                                    
TestAddons/Setup (207.49s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-435364 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-435364 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m27.487711056s)
--- PASS: TestAddons/Setup (207.49s)
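The single start command used here is easier to read split across lines; this is the same invocation as above, with addons-demo as a placeholder profile name:

    minikube start -p addons-demo --wait=true --memory=4000 --alsologtostderr \
        --driver=kvm2 --container-runtime=crio \
        --addons=registry --addons=metrics-server --addons=volumesnapshots \
        --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner \
        --addons=inspektor-gadget --addons=storage-provisioner-rancher \
        --addons=nvidia-device-plugin --addons=yakd --addons=volcano \
        --addons=ingress --addons=ingress-dns --addons=helm-tiller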

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.15s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-435364 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-435364 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                    
TestAddons/parallel/Registry (17.5s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 4.96812ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-698f998955-4stmn" [c0716044-6d96-44a5-ab8d-03023e2da298] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005233266s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-2dplh" [a8ad0955-3945-41ac-a7b2-78bf1d724a1a] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004935044s
addons_test.go:342: (dbg) Run:  kubectl --context addons-435364 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-435364 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-435364 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.690153556s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-435364 ip
2024/08/05 22:53:52 [DEBUG] GET http://192.168.39.129:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-435364 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.50s)
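The two probes the test performs can be run by hand once the addon is up; a sketch assuming the profile is named addons-demo (the in-cluster hostname and the :5000 port are taken from the log above):

    # in-cluster check: the registry Service answers a HEAD request from a throwaway busybox pod
    kubectl --context addons-demo run --rm registry-test --restart=Never \
        --image=gcr.io/k8s-minikube/busybox -it -- \
        sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # out-of-cluster check against the node IP reported by "minikube ip"
    curl -sI http://$(minikube -p addons-demo ip):5000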

                                                
                                    
TestAddons/parallel/InspektorGadget (11.16s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-2x64m" [534510f3-5541-4591-8759-4758ac8b340d] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.007313569s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-435364
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-435364: (6.15432595s)
--- PASS: TestAddons/parallel/InspektorGadget (11.16s)

                                                
                                    
TestAddons/parallel/HelmTiller (15.2s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 3.014801ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-qn6ln" [4188df06-7e5f-4218-bf0f-658f8c51bfb9] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.023992373s
addons_test.go:475: (dbg) Run:  kubectl --context addons-435364 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-435364 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (9.54457307s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-435364 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (15.20s)
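The functional check is a single throwaway pod running the helm 2.x client against tiller in kube-system; assuming the profile is named addons-demo:

    kubectl --context addons-demo run --rm helm-test --restart=Never \
        --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version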

                                                
                                    
TestAddons/parallel/CSI (69.81s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 8.140294ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-435364 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435364 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435364 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-435364 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [ffddcae2-5e05-4a0f-b0a9-d2925b3e0f80] Pending
helpers_test.go:344: "task-pv-pod" [ffddcae2-5e05-4a0f-b0a9-d2925b3e0f80] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [ffddcae2-5e05-4a0f-b0a9-d2925b3e0f80] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.003783902s
addons_test.go:590: (dbg) Run:  kubectl --context addons-435364 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-435364 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-435364 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-435364 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-435364 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-435364 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435364 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435364 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435364 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435364 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435364 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435364 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435364 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435364 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435364 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435364 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435364 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435364 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435364 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435364 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435364 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435364 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435364 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-435364 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [51ea3acb-e95d-4a6d-98b5-da9d2f30a7bf] Pending
helpers_test.go:344: "task-pv-pod-restore" [51ea3acb-e95d-4a6d-98b5-da9d2f30a7bf] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [51ea3acb-e95d-4a6d-98b5-da9d2f30a7bf] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003795127s
addons_test.go:632: (dbg) Run:  kubectl --context addons-435364 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-435364 delete pod task-pv-pod-restore: (1.072693227s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-435364 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-435364 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-435364 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-435364 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.053600768s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-435364 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (69.81s)
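Condensed, and with the per-step waits and the --context flag omitted, the CSI scenario above is the following sequence (object names match the log; the YAML files live in the repo's testdata directory):

    kubectl create -f testdata/csi-hostpath-driver/pvc.yaml            # PVC: hpvc
    kubectl get pvc hpvc -o jsonpath={.status.phase}                   # polled while the claim settles, as in the log
    kubectl create -f testdata/csi-hostpath-driver/pv-pod.yaml         # pod: task-pv-pod
    kubectl create -f testdata/csi-hostpath-driver/snapshot.yaml       # VolumeSnapshot: new-snapshot-demo
    kubectl delete pod task-pv-pod && kubectl delete pvc hpvc
    kubectl create -f testdata/csi-hostpath-driver/pvc-restore.yaml    # PVC: hpvc-restore
    kubectl create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml # pod: task-pv-pod-restore
    kubectl delete pod task-pv-pod-restore && kubectl delete pvc hpvc-restore
    kubectl delete volumesnapshot new-snapshot-demo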

                                                
                                    
TestAddons/parallel/Headlamp (19.84s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-435364 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-9d868696f-w4ncx" [8d353723-5ceb-46ce-8829-c15fde070d4d] Pending
helpers_test.go:344: "headlamp-9d868696f-w4ncx" [8d353723-5ceb-46ce-8829-c15fde070d4d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-9d868696f-w4ncx" [8d353723-5ceb-46ce-8829-c15fde070d4d] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.015363926s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-435364 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-435364 addons disable headlamp --alsologtostderr -v=1: (5.861770034s)
--- PASS: TestAddons/parallel/Headlamp (19.84s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.54s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5455fb9b69-hw9p8" [13a866bd-6e1f-4a4a-bda2-8c87c908f3cd] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004156191s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-435364
--- PASS: TestAddons/parallel/CloudSpanner (6.54s)

                                                
                                    
TestAddons/parallel/LocalPath (56.11s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-435364 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-435364 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435364 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435364 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435364 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435364 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435364 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435364 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435364 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-435364 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [381de454-3a58-4bbc-ae6b-e7ec2e0293e2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [381de454-3a58-4bbc-ae6b-e7ec2e0293e2] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [381de454-3a58-4bbc-ae6b-e7ec2e0293e2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003897348s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-435364 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-435364 ssh "cat /opt/local-path-provisioner/pvc-df517976-b98a-4ba5-bb26-cc04d40ee4f9_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-435364 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-435364 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-435364 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-435364 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.335434682s)
--- PASS: TestAddons/parallel/LocalPath (56.11s)
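The same flow by hand (addons-demo is a placeholder profile; the directory under /opt/local-path-provisioner/ embeds the generated PV name, so it differs per run):

    kubectl apply -f testdata/storage-provisioner-rancher/pvc.yaml   # PVC: test-pvc
    kubectl apply -f testdata/storage-provisioner-rancher/pod.yaml   # pod: test-local-path
    kubectl get pvc test-pvc -o=json                                 # note the bound volumeName
    # the provisioned data is then visible on the node, e.g.:
    minikube -p addons-demo ssh \
        "cat /opt/local-path-provisioner/<volumeName>_default_test-pvc/file1"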

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.54s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-jk9q5" [1a23f5f9-2fc4-453c-9381-177bf606032d] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005060179s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-435364
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.54s)

                                                
                                    
TestAddons/parallel/Yakd (10.76s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-2d7j2" [6bbd39b7-b1ed-45d6-80c9-c596175008ea] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.005095949s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-435364 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-435364 addons disable yakd --alsologtostderr -v=1: (5.757743598s)
--- PASS: TestAddons/parallel/Yakd (10.76s)

                                                
                                    
TestCertOptions (48.55s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-323157 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-323157 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (47.131548026s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-323157 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-323157 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-323157 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-323157" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-323157
--- PASS: TestCertOptions (48.55s)
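The assertions map onto three by-hand checks after a start with the custom API server options (cert-options-demo is a placeholder profile):

    minikube start -p cert-options-demo --memory=2048 \
        --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
        --apiserver-names=localhost --apiserver-names=www.google.com \
        --apiserver-port=8555 --driver=kvm2 --container-runtime=crio
    # the extra IPs/names should appear as SANs in the generated API server certificate
    minikube -p cert-options-demo ssh \
        "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
    # and the non-default port 8555 should show up in the kubeconfig and admin.conf
    kubectl --context cert-options-demo config view
    minikube ssh -p cert-options-demo -- "sudo cat /etc/kubernetes/admin.conf"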

                                                
                                    
TestCertExpiration (411.16s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-272169 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-272169 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m25.081802162s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-272169 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-272169 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (2m25.286119358s)
helpers_test.go:175: Cleaning up "cert-expiration-272169" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-272169
--- PASS: TestCertExpiration (411.16s)
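In outline the test is two starts of the same profile: the first issues certificates that expire after three minutes, and the second, run after that window has passed, has to succeed anyway. A sketch with cert-expiration-demo as a placeholder profile:

    minikube start -p cert-expiration-demo --memory=2048 --cert-expiration=3m \
        --driver=kvm2 --container-runtime=crio
    # wait for the 3m window to lapse, then:
    minikube start -p cert-expiration-demo --memory=2048 --cert-expiration=8760h \
        --driver=kvm2 --container-runtime=crio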

                                                
                                    
TestForceSystemdFlag (61.36s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-936727 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-936727 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m0.393918202s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-936727 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-936727" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-936727
--- PASS: TestForceSystemdFlag (61.36s)
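The check boils down to starting with --force-systemd and reading CRI-O's drop-in config inside the VM; force-systemd-demo is a placeholder profile, and the expectation (as I read the test) is that the drop-in selects the systemd cgroup manager:

    minikube start -p force-systemd-demo --memory=2048 --force-systemd \
        --alsologtostderr -v=5 --driver=kvm2 --container-runtime=crio
    minikube -p force-systemd-demo ssh "cat /etc/crio/crio.conf.d/02-crio.conf"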

                                                
                                    
TestForceSystemdEnv (86.33s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-571298 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-571298 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m25.30039704s)
helpers_test.go:175: Cleaning up "force-systemd-env-571298" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-571298
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-571298: (1.026688245s)
--- PASS: TestForceSystemdEnv (86.33s)

                                                
                                    
TestKVMDriverInstallOrUpdate (4.69s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.69s)

                                                
                                    
TestErrorSpam/setup (45.92s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-012121 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-012121 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-012121 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-012121 --driver=kvm2  --container-runtime=crio: (45.916435718s)
--- PASS: TestErrorSpam/setup (45.92s)

                                                
                                    
TestErrorSpam/start (0.33s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-012121 --log_dir /tmp/nospam-012121 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-012121 --log_dir /tmp/nospam-012121 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-012121 --log_dir /tmp/nospam-012121 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
TestErrorSpam/status (0.73s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-012121 --log_dir /tmp/nospam-012121 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-012121 --log_dir /tmp/nospam-012121 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-012121 --log_dir /tmp/nospam-012121 status
--- PASS: TestErrorSpam/status (0.73s)

                                                
                                    
TestErrorSpam/pause (1.6s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-012121 --log_dir /tmp/nospam-012121 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-012121 --log_dir /tmp/nospam-012121 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-012121 --log_dir /tmp/nospam-012121 pause
--- PASS: TestErrorSpam/pause (1.60s)

                                                
                                    
TestErrorSpam/unpause (1.64s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-012121 --log_dir /tmp/nospam-012121 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-012121 --log_dir /tmp/nospam-012121 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-012121 --log_dir /tmp/nospam-012121 unpause
--- PASS: TestErrorSpam/unpause (1.64s)

                                                
                                    
TestErrorSpam/stop (5.43s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-012121 --log_dir /tmp/nospam-012121 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-012121 --log_dir /tmp/nospam-012121 stop: (2.290746948s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-012121 --log_dir /tmp/nospam-012121 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-012121 --log_dir /tmp/nospam-012121 stop: (1.16975871s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-012121 --log_dir /tmp/nospam-012121 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-012121 --log_dir /tmp/nospam-012121 stop: (1.969616606s)
--- PASS: TestErrorSpam/stop (5.43s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19373-9606/.minikube/files/etc/test/nested/copy/16792/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (98.25s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-299463 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0805 23:03:16.352137   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.crt: no such file or directory
E0805 23:03:16.357805   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.crt: no such file or directory
E0805 23:03:16.368121   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.crt: no such file or directory
E0805 23:03:16.388462   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.crt: no such file or directory
E0805 23:03:16.428726   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.crt: no such file or directory
E0805 23:03:16.509038   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.crt: no such file or directory
E0805 23:03:16.669500   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.crt: no such file or directory
E0805 23:03:16.990102   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.crt: no such file or directory
E0805 23:03:17.631027   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.crt: no such file or directory
E0805 23:03:18.911319   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.crt: no such file or directory
E0805 23:03:21.473124   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.crt: no such file or directory
E0805 23:03:26.593795   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.crt: no such file or directory
E0805 23:03:36.834105   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.crt: no such file or directory
E0805 23:03:57.315138   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.crt: no such file or directory
E0805 23:04:38.276105   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-299463 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m38.247010026s)
--- PASS: TestFunctional/serial/StartWithProxy (98.25s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (42.41s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-299463 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-299463 --alsologtostderr -v=8: (42.412709165s)
functional_test.go:659: soft start took 42.413255321s for "functional-299463" cluster.
--- PASS: TestFunctional/serial/SoftStart (42.41s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-299463 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.03s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-299463 cache add registry.k8s.io/pause:3.3: (1.087011387s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.03s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-299463 /tmp/TestFunctionalserialCacheCmdcacheadd_local1718125277/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 cache add minikube-local-cache-test:functional-299463
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-299463 cache add minikube-local-cache-test:functional-299463: (1.873028127s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 cache delete minikube-local-cache-test:functional-299463
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-299463
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.63s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-299463 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (210.52761ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.63s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 kubectl -- --context functional-299463 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-299463 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (60.05s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-299463 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0805 23:06:00.197804   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-299463 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m0.051638809s)
functional_test.go:757: restart took 1m0.051791452s for "functional-299463" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (60.05s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-299463 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.47s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-299463 logs: (1.467361917s)
--- PASS: TestFunctional/serial/LogsCmd (1.47s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.58s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 logs --file /tmp/TestFunctionalserialLogsFileCmd1083890390/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-299463 logs --file /tmp/TestFunctionalserialLogsFileCmd1083890390/001/logs.txt: (1.579134049s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.58s)

                                                
                                    
TestFunctional/serial/InvalidService (3.56s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-299463 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-299463
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-299463: exit status 115 (288.214732ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.190:30327 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-299463 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.56s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-299463 config get cpus: exit status 14 (62.123125ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-299463 config get cpus: exit status 14 (48.321942ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.34s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (12.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-299463 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-299463 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 27519: os: process already finished
E0805 23:08:16.352322   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.crt: no such file or directory
E0805 23:08:44.038931   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/DashboardCmd (12.34s)

                                                
                                    
TestFunctional/parallel/DryRun (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-299463 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-299463 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (129.305206ms)

                                                
                                                
-- stdout --
	* [functional-299463] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19373
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19373-9606/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19373-9606/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 23:07:51.543300   27181 out.go:291] Setting OutFile to fd 1 ...
	I0805 23:07:51.543431   27181 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:07:51.543440   27181 out.go:304] Setting ErrFile to fd 2...
	I0805 23:07:51.543445   27181 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:07:51.543651   27181 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-9606/.minikube/bin
	I0805 23:07:51.544168   27181 out.go:298] Setting JSON to false
	I0805 23:07:51.545076   27181 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3018,"bootTime":1722896254,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0805 23:07:51.545136   27181 start.go:139] virtualization: kvm guest
	I0805 23:07:51.547334   27181 out.go:177] * [functional-299463] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0805 23:07:51.548778   27181 notify.go:220] Checking for updates...
	I0805 23:07:51.548817   27181 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 23:07:51.550446   27181 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 23:07:51.551841   27181 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19373-9606/kubeconfig
	I0805 23:07:51.553243   27181 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19373-9606/.minikube
	I0805 23:07:51.554618   27181 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0805 23:07:51.555915   27181 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 23:07:51.557801   27181 config.go:182] Loaded profile config "functional-299463": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:07:51.558388   27181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:07:51.558467   27181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:07:51.573284   27181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44335
	I0805 23:07:51.573625   27181 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:07:51.574126   27181 main.go:141] libmachine: Using API Version  1
	I0805 23:07:51.574151   27181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:07:51.574483   27181 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:07:51.574711   27181 main.go:141] libmachine: (functional-299463) Calling .DriverName
	I0805 23:07:51.575019   27181 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 23:07:51.575353   27181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:07:51.575390   27181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:07:51.590148   27181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45035
	I0805 23:07:51.590524   27181 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:07:51.590990   27181 main.go:141] libmachine: Using API Version  1
	I0805 23:07:51.591011   27181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:07:51.591336   27181 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:07:51.591548   27181 main.go:141] libmachine: (functional-299463) Calling .DriverName
	I0805 23:07:51.624787   27181 out.go:177] * Using the kvm2 driver based on existing profile
	I0805 23:07:51.626550   27181 start.go:297] selected driver: kvm2
	I0805 23:07:51.626567   27181 start.go:901] validating driver "kvm2" against &{Name:functional-299463 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-299463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.190 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 23:07:51.626811   27181 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 23:07:51.629401   27181 out.go:177] 
	W0805 23:07:51.630999   27181 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0805 23:07:51.632580   27181 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-299463 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.26s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-299463 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-299463 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (135.164718ms)

                                                
                                                
-- stdout --
	* [functional-299463] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19373
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19373-9606/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19373-9606/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 23:07:51.804085   27244 out.go:291] Setting OutFile to fd 1 ...
	I0805 23:07:51.804364   27244 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:07:51.804375   27244 out.go:304] Setting ErrFile to fd 2...
	I0805 23:07:51.804379   27244 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:07:51.804693   27244 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-9606/.minikube/bin
	I0805 23:07:51.805218   27244 out.go:298] Setting JSON to false
	I0805 23:07:51.806219   27244 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3018,"bootTime":1722896254,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0805 23:07:51.806277   27244 start.go:139] virtualization: kvm guest
	I0805 23:07:51.808569   27244 out.go:177] * [functional-299463] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0805 23:07:51.810349   27244 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 23:07:51.810361   27244 notify.go:220] Checking for updates...
	I0805 23:07:51.813193   27244 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 23:07:51.814615   27244 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19373-9606/kubeconfig
	I0805 23:07:51.815984   27244 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19373-9606/.minikube
	I0805 23:07:51.817567   27244 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0805 23:07:51.819160   27244 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 23:07:51.821187   27244 config.go:182] Loaded profile config "functional-299463": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:07:51.821683   27244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:07:51.821742   27244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:07:51.836616   27244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39099
	I0805 23:07:51.837075   27244 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:07:51.837663   27244 main.go:141] libmachine: Using API Version  1
	I0805 23:07:51.837684   27244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:07:51.838048   27244 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:07:51.838287   27244 main.go:141] libmachine: (functional-299463) Calling .DriverName
	I0805 23:07:51.838538   27244 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 23:07:51.838827   27244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:07:51.838869   27244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:07:51.854551   27244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37043
	I0805 23:07:51.855244   27244 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:07:51.855849   27244 main.go:141] libmachine: Using API Version  1
	I0805 23:07:51.855881   27244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:07:51.856159   27244 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:07:51.856462   27244 main.go:141] libmachine: (functional-299463) Calling .DriverName
	I0805 23:07:51.889888   27244 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0805 23:07:51.891479   27244 start.go:297] selected driver: kvm2
	I0805 23:07:51.891499   27244 start.go:901] validating driver "kvm2" against &{Name:functional-299463 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-299463 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.190 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 23:07:51.891639   27244 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 23:07:51.893814   27244 out.go:177] 
	W0805 23:07:51.895271   27244 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0805 23:07:51.896871   27244 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.79s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (60.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-299463 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-299463 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-pb8sx" [5ba59c63-9a77-49dc-b580-b4cf2136a81e] Pending
helpers_test.go:344: "hello-node-connect-57b4589c47-pb8sx" [5ba59c63-9a77-49dc-b580-b4cf2136a81e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-pb8sx" [5ba59c63-9a77-49dc-b580-b4cf2136a81e] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 1m0.005329556s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.190:31496
functional_test.go:1671: http://192.168.39.190:31496: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-pb8sx

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.190:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.190:31496
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (60.45s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.40s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 ssh -n functional-299463 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 cp functional-299463:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd714381329/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 ssh -n functional-299463 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 ssh -n functional-299463 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.28s)

                                                
                                    
TestFunctional/parallel/MySQL (46.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-299463 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-2272b" [97b31bc8-211e-4919-bf6e-5b6f10bdb0bc] Pending
helpers_test.go:344: "mysql-64454c8b5c-2272b" [97b31bc8-211e-4919-bf6e-5b6f10bdb0bc] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-2272b" [97b31bc8-211e-4919-bf6e-5b6f10bdb0bc] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 46.00370355s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-299463 exec mysql-64454c8b5c-2272b -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (46.33s)

                                                
                                    
TestFunctional/parallel/FileSync (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/16792/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 ssh "sudo cat /etc/test/nested/copy/16792/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.19s)

                                                
                                    
TestFunctional/parallel/CertSync (1.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/16792.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 ssh "sudo cat /etc/ssl/certs/16792.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/16792.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 ssh "sudo cat /usr/share/ca-certificates/16792.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/167922.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 ssh "sudo cat /etc/ssl/certs/167922.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/167922.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 ssh "sudo cat /usr/share/ca-certificates/167922.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.27s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-299463 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-299463 ssh "sudo systemctl is-active docker": exit status 1 (195.672506ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-299463 ssh "sudo systemctl is-active containerd": exit status 1 (183.895678ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.38s)
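The same check is easy to run by hand: with cri-o as the active runtime, both docker and containerd should report inactive (systemctl is-active exits with status 3 for an inactive unit, which the log records as "Process exited with status 3"). A small sketch built from the commands in the log; the last line assumes the cri-o unit is named crio, which this log does not show:

# Expect "inactive" for the runtimes that are not in use.
out/minikube-linux-amd64 -p functional-299463 ssh "sudo systemctl is-active docker"
out/minikube-linux-amd64 -p functional-299463 ssh "sudo systemctl is-active containerd"
# Assumed unit name for the active runtime; expect "active" if the profile really runs cri-o.
out/minikube-linux-amd64 -p functional-299463 ssh "sudo systemctl is-active crio"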

                                                
                                    
TestFunctional/parallel/License (0.61s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.61s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (61.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-299463 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-299463 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-bvls2" [545b8cbe-5efb-4d9c-bcff-6fa15814afeb] Pending
helpers_test.go:344: "hello-node-6d85cfcfd8-bvls2" [545b8cbe-5efb-4d9c-bcff-6fa15814afeb] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-bvls2" [545b8cbe-5efb-4d9c-bcff-6fa15814afeb] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 1m1.004053341s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (61.21s)
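The deployment exercised here can be reproduced outside the test harness with the same kubectl calls recorded above; a minimal sketch (the trailing watch is an added convenience, not part of the test):

# Create and expose the echoserver deployment, then watch for the pod to become Ready.
kubectl --context functional-299463 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
kubectl --context functional-299463 expose deployment hello-node --type=NodePort --port=8080
kubectl --context functional-299463 get pods -l app=hello-node -w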

                                                
                                    
TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

                                                
                                    
TestFunctional/parallel/Version/components (0.82s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.82s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-299463 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-299463
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240715-585640e9
docker.io/kicbase/echo-server:functional-299463
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-299463 image ls --format short --alsologtostderr:
I0805 23:07:53.948344   27529 out.go:291] Setting OutFile to fd 1 ...
I0805 23:07:53.948451   27529 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 23:07:53.948459   27529 out.go:304] Setting ErrFile to fd 2...
I0805 23:07:53.948464   27529 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 23:07:53.948623   27529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-9606/.minikube/bin
I0805 23:07:53.949189   27529 config.go:182] Loaded profile config "functional-299463": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0805 23:07:53.949283   27529 config.go:182] Loaded profile config "functional-299463": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0805 23:07:53.949633   27529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0805 23:07:53.949671   27529 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 23:07:53.964463   27529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41937
I0805 23:07:53.964937   27529 main.go:141] libmachine: () Calling .GetVersion
I0805 23:07:53.965479   27529 main.go:141] libmachine: Using API Version  1
I0805 23:07:53.965505   27529 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 23:07:53.965803   27529 main.go:141] libmachine: () Calling .GetMachineName
I0805 23:07:53.966042   27529 main.go:141] libmachine: (functional-299463) Calling .GetState
I0805 23:07:53.967843   27529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0805 23:07:53.967874   27529 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 23:07:53.984013   27529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43429
I0805 23:07:53.984441   27529 main.go:141] libmachine: () Calling .GetVersion
I0805 23:07:53.984912   27529 main.go:141] libmachine: Using API Version  1
I0805 23:07:53.984934   27529 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 23:07:53.985294   27529 main.go:141] libmachine: () Calling .GetMachineName
I0805 23:07:53.985538   27529 main.go:141] libmachine: (functional-299463) Calling .DriverName
I0805 23:07:53.985750   27529 ssh_runner.go:195] Run: systemctl --version
I0805 23:07:53.985773   27529 main.go:141] libmachine: (functional-299463) Calling .GetSSHHostname
I0805 23:07:53.988556   27529 main.go:141] libmachine: (functional-299463) DBG | domain functional-299463 has defined MAC address 52:54:00:bb:f2:19 in network mk-functional-299463
I0805 23:07:53.989106   27529 main.go:141] libmachine: (functional-299463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:f2:19", ip: ""} in network mk-functional-299463: {Iface:virbr1 ExpiryTime:2024-08-06 00:03:29 +0000 UTC Type:0 Mac:52:54:00:bb:f2:19 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:functional-299463 Clientid:01:52:54:00:bb:f2:19}
I0805 23:07:53.989144   27529 main.go:141] libmachine: (functional-299463) DBG | domain functional-299463 has defined IP address 192.168.39.190 and MAC address 52:54:00:bb:f2:19 in network mk-functional-299463
I0805 23:07:53.989232   27529 main.go:141] libmachine: (functional-299463) Calling .GetSSHPort
I0805 23:07:53.989434   27529 main.go:141] libmachine: (functional-299463) Calling .GetSSHKeyPath
I0805 23:07:53.989603   27529 main.go:141] libmachine: (functional-299463) Calling .GetSSHUsername
I0805 23:07:53.989711   27529 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/functional-299463/id_rsa Username:docker}
I0805 23:07:54.070370   27529 ssh_runner.go:195] Run: sudo crictl images --output json
I0805 23:07:54.110029   27529 main.go:141] libmachine: Making call to close driver server
I0805 23:07:54.110040   27529 main.go:141] libmachine: (functional-299463) Calling .Close
I0805 23:07:54.110340   27529 main.go:141] libmachine: Successfully made call to close driver server
I0805 23:07:54.110374   27529 main.go:141] libmachine: (functional-299463) DBG | Closing plugin on server side
I0805 23:07:54.110390   27529 main.go:141] libmachine: Making call to close connection to plugin binary
I0805 23:07:54.110405   27529 main.go:141] libmachine: Making call to close driver server
I0805 23:07:54.110417   27529 main.go:141] libmachine: (functional-299463) Calling .Close
I0805 23:07:54.110668   27529 main.go:141] libmachine: Successfully made call to close driver server
I0805 23:07:54.110690   27529 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-299463 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| docker.io/kindest/kindnetd              | v20240715-585640e9 | 5cc3abe5717db | 87.2MB |
| localhost/minikube-local-cache-test     | functional-299463  | dffcbfa6c5ae5 | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-controller-manager | v1.30.3            | 76932a3b37d7e | 112MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/kicbase/echo-server           | functional-299463  | 9056ab77afb8e | 4.94MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| localhost/my-image                      | functional-299463  | 75d5e26649824 | 1.47MB |
| registry.k8s.io/kube-apiserver          | v1.30.3            | 1f6d574d502f3 | 118MB  |
| registry.k8s.io/kube-proxy              | v1.30.3            | 55bb025d2cfa5 | 86MB   |
| registry.k8s.io/kube-scheduler          | v1.30.3            | 3edc18e7b7672 | 63.1MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-299463 image ls --format table --alsologtostderr:
I0805 23:07:57.916772   27880 out.go:291] Setting OutFile to fd 1 ...
I0805 23:07:57.916889   27880 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 23:07:57.916898   27880 out.go:304] Setting ErrFile to fd 2...
I0805 23:07:57.916902   27880 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 23:07:57.917094   27880 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-9606/.minikube/bin
I0805 23:07:57.917612   27880 config.go:182] Loaded profile config "functional-299463": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0805 23:07:57.917701   27880 config.go:182] Loaded profile config "functional-299463": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0805 23:07:57.918034   27880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0805 23:07:57.918090   27880 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 23:07:57.933349   27880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34231
I0805 23:07:57.933841   27880 main.go:141] libmachine: () Calling .GetVersion
I0805 23:07:57.934449   27880 main.go:141] libmachine: Using API Version  1
I0805 23:07:57.934473   27880 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 23:07:57.934998   27880 main.go:141] libmachine: () Calling .GetMachineName
I0805 23:07:57.935193   27880 main.go:141] libmachine: (functional-299463) Calling .GetState
I0805 23:07:57.937160   27880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0805 23:07:57.937202   27880 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 23:07:57.953928   27880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32779
I0805 23:07:57.954407   27880 main.go:141] libmachine: () Calling .GetVersion
I0805 23:07:57.954910   27880 main.go:141] libmachine: Using API Version  1
I0805 23:07:57.954934   27880 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 23:07:57.955318   27880 main.go:141] libmachine: () Calling .GetMachineName
I0805 23:07:57.955505   27880 main.go:141] libmachine: (functional-299463) Calling .DriverName
I0805 23:07:57.955721   27880 ssh_runner.go:195] Run: systemctl --version
I0805 23:07:57.955749   27880 main.go:141] libmachine: (functional-299463) Calling .GetSSHHostname
I0805 23:07:57.959014   27880 main.go:141] libmachine: (functional-299463) DBG | domain functional-299463 has defined MAC address 52:54:00:bb:f2:19 in network mk-functional-299463
I0805 23:07:57.959475   27880 main.go:141] libmachine: (functional-299463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:f2:19", ip: ""} in network mk-functional-299463: {Iface:virbr1 ExpiryTime:2024-08-06 00:03:29 +0000 UTC Type:0 Mac:52:54:00:bb:f2:19 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:functional-299463 Clientid:01:52:54:00:bb:f2:19}
I0805 23:07:57.959509   27880 main.go:141] libmachine: (functional-299463) DBG | domain functional-299463 has defined IP address 192.168.39.190 and MAC address 52:54:00:bb:f2:19 in network mk-functional-299463
I0805 23:07:57.959663   27880 main.go:141] libmachine: (functional-299463) Calling .GetSSHPort
I0805 23:07:57.959835   27880 main.go:141] libmachine: (functional-299463) Calling .GetSSHKeyPath
I0805 23:07:57.960059   27880 main.go:141] libmachine: (functional-299463) Calling .GetSSHUsername
I0805 23:07:57.960287   27880 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/functional-299463/id_rsa Username:docker}
I0805 23:07:58.057432   27880 ssh_runner.go:195] Run: sudo crictl images --output json
I0805 23:07:58.118564   27880 main.go:141] libmachine: Making call to close driver server
I0805 23:07:58.118597   27880 main.go:141] libmachine: (functional-299463) Calling .Close
I0805 23:07:58.118864   27880 main.go:141] libmachine: Successfully made call to close driver server
I0805 23:07:58.118897   27880 main.go:141] libmachine: Making call to close connection to plugin binary
I0805 23:07:58.118913   27880 main.go:141] libmachine: Making call to close driver server
I0805 23:07:58.118920   27880 main.go:141] libmachine: (functional-299463) Calling .Close
I0805 23:07:58.118918   27880 main.go:141] libmachine: (functional-299463) DBG | Closing plugin on server side
I0805 23:07:58.119252   27880 main.go:141] libmachine: Successfully made call to close driver server
I0805 23:07:58.119269   27880 main.go:141] libmachine: Making call to close connection to plugin binary
I0805 23:07:58.119291   27880 main.go:141] libmachine: (functional-299463) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-299463 image ls --format json --alsologtostderr:
[{"id":"5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f","repoDigests":["docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115","docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size":"87165492"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":["registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c","registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315
"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117609954"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:functional-299463"],"size":"4943877"},{"id":"75d5e26649824dc6d5fd049af549bf09b614fae2b62d99e4bc24d7d78cfdc288","repoDigests":["localhost/my-image@sha256:296ef0042119a22046add83bd7121e6166c30cb87dcaa881d8beba1ce6bd46a6"],"repoTags":["localhost/my-image:functional-299463"],"size":"1468600"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDige
sts":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"dffcbfa6c5ae5e7bbca8ae1e8abee8d133025c3f192fe5d50cb6ef1bee6ddeeb","repoDigests":["localhost/minikube-local-cache-test@sha256:5ee6bef0fc938bf4c2dee363eb9f7224b95fa24e5b53b75caa658779bc3eed14"],"repoTags":["localhost/minikube-local-cache-test:functional-299463"],"size":"3330"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d49
17f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"7ceffab290bdb104de86f6eb67a6aa581de42c619afb4900c8009f3e9f48b11a","repoDigests":["docker.io/library/0dfd40cc3d90a32a0e2c8b5dc0ab0e89150e993f83c51ea77049c5456946b48d-tmp@sha256:4a89a4a57dbd7994122e0b781d7cdce6b23014e74ad79cab4802d29dacf49535"],"repoTags":[],"size":"1466018"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bd
b1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":["registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"
],"size":"63051080"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7","registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"112198984"},{"id":"55bb02
5d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":["registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"85953945"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-299463 image ls --format json --alsologtostderr:
I0805 23:07:58.166013   27905 out.go:291] Setting OutFile to fd 1 ...
I0805 23:07:58.166121   27905 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 23:07:58.166129   27905 out.go:304] Setting ErrFile to fd 2...
I0805 23:07:58.166134   27905 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 23:07:58.166309   27905 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-9606/.minikube/bin
I0805 23:07:58.166848   27905 config.go:182] Loaded profile config "functional-299463": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0805 23:07:58.166941   27905 config.go:182] Loaded profile config "functional-299463": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0805 23:07:58.167358   27905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0805 23:07:58.167399   27905 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 23:07:58.183551   27905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35277
I0805 23:07:58.184048   27905 main.go:141] libmachine: () Calling .GetVersion
I0805 23:07:58.184671   27905 main.go:141] libmachine: Using API Version  1
I0805 23:07:58.184697   27905 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 23:07:58.185037   27905 main.go:141] libmachine: () Calling .GetMachineName
I0805 23:07:58.185280   27905 main.go:141] libmachine: (functional-299463) Calling .GetState
I0805 23:07:58.187390   27905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0805 23:07:58.187432   27905 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 23:07:58.205553   27905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33617
I0805 23:07:58.206031   27905 main.go:141] libmachine: () Calling .GetVersion
I0805 23:07:58.206606   27905 main.go:141] libmachine: Using API Version  1
I0805 23:07:58.206629   27905 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 23:07:58.207028   27905 main.go:141] libmachine: () Calling .GetMachineName
I0805 23:07:58.207248   27905 main.go:141] libmachine: (functional-299463) Calling .DriverName
I0805 23:07:58.207459   27905 ssh_runner.go:195] Run: systemctl --version
I0805 23:07:58.207496   27905 main.go:141] libmachine: (functional-299463) Calling .GetSSHHostname
I0805 23:07:58.210556   27905 main.go:141] libmachine: (functional-299463) DBG | domain functional-299463 has defined MAC address 52:54:00:bb:f2:19 in network mk-functional-299463
I0805 23:07:58.210978   27905 main.go:141] libmachine: (functional-299463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:f2:19", ip: ""} in network mk-functional-299463: {Iface:virbr1 ExpiryTime:2024-08-06 00:03:29 +0000 UTC Type:0 Mac:52:54:00:bb:f2:19 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:functional-299463 Clientid:01:52:54:00:bb:f2:19}
I0805 23:07:58.211010   27905 main.go:141] libmachine: (functional-299463) DBG | domain functional-299463 has defined IP address 192.168.39.190 and MAC address 52:54:00:bb:f2:19 in network mk-functional-299463
I0805 23:07:58.211188   27905 main.go:141] libmachine: (functional-299463) Calling .GetSSHPort
I0805 23:07:58.211397   27905 main.go:141] libmachine: (functional-299463) Calling .GetSSHKeyPath
I0805 23:07:58.211549   27905 main.go:141] libmachine: (functional-299463) Calling .GetSSHUsername
I0805 23:07:58.211717   27905 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/functional-299463/id_rsa Username:docker}
I0805 23:07:58.316825   27905 ssh_runner.go:195] Run: sudo crictl images --output json
I0805 23:07:58.405551   27905 main.go:141] libmachine: Making call to close driver server
I0805 23:07:58.405567   27905 main.go:141] libmachine: (functional-299463) Calling .Close
I0805 23:07:58.405851   27905 main.go:141] libmachine: Successfully made call to close driver server
I0805 23:07:58.405867   27905 main.go:141] libmachine: Making call to close connection to plugin binary
I0805 23:07:58.405884   27905 main.go:141] libmachine: Making call to close driver server
I0805 23:07:58.405894   27905 main.go:141] libmachine: (functional-299463) Calling .Close
I0805 23:07:58.406133   27905 main.go:141] libmachine: Successfully made call to close driver server
I0805 23:07:58.406147   27905 main.go:141] libmachine: Making call to close connection to plugin binary
I0805 23:07:58.406171   27905 main.go:141] libmachine: (functional-299463) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-299463 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f
repoDigests:
- docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "87165492"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: dffcbfa6c5ae5e7bbca8ae1e8abee8d133025c3f192fe5d50cb6ef1bee6ddeeb
repoDigests:
- localhost/minikube-local-cache-test@sha256:5ee6bef0fc938bf4c2dee363eb9f7224b95fa24e5b53b75caa658779bc3eed14
repoTags:
- localhost/minikube-local-cache-test:functional-299463
size: "3330"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c
- registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117609954"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7
- registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "112198984"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests:
- registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80
- registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "85953945"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:functional-299463
size: "4943877"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266
- registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "63051080"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-299463 image ls --format yaml --alsologtostderr:
I0805 23:07:54.153623   27553 out.go:291] Setting OutFile to fd 1 ...
I0805 23:07:54.153872   27553 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 23:07:54.153879   27553 out.go:304] Setting ErrFile to fd 2...
I0805 23:07:54.153884   27553 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 23:07:54.154070   27553 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-9606/.minikube/bin
I0805 23:07:54.154602   27553 config.go:182] Loaded profile config "functional-299463": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0805 23:07:54.154700   27553 config.go:182] Loaded profile config "functional-299463": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0805 23:07:54.155027   27553 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0805 23:07:54.155087   27553 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 23:07:54.170069   27553 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43853
I0805 23:07:54.170560   27553 main.go:141] libmachine: () Calling .GetVersion
I0805 23:07:54.171163   27553 main.go:141] libmachine: Using API Version  1
I0805 23:07:54.171190   27553 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 23:07:54.171551   27553 main.go:141] libmachine: () Calling .GetMachineName
I0805 23:07:54.171744   27553 main.go:141] libmachine: (functional-299463) Calling .GetState
I0805 23:07:54.173533   27553 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0805 23:07:54.173579   27553 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 23:07:54.188142   27553 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34735
I0805 23:07:54.188581   27553 main.go:141] libmachine: () Calling .GetVersion
I0805 23:07:54.189079   27553 main.go:141] libmachine: Using API Version  1
I0805 23:07:54.189105   27553 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 23:07:54.189413   27553 main.go:141] libmachine: () Calling .GetMachineName
I0805 23:07:54.189624   27553 main.go:141] libmachine: (functional-299463) Calling .DriverName
I0805 23:07:54.189820   27553 ssh_runner.go:195] Run: systemctl --version
I0805 23:07:54.189843   27553 main.go:141] libmachine: (functional-299463) Calling .GetSSHHostname
I0805 23:07:54.192975   27553 main.go:141] libmachine: (functional-299463) DBG | domain functional-299463 has defined MAC address 52:54:00:bb:f2:19 in network mk-functional-299463
I0805 23:07:54.193395   27553 main.go:141] libmachine: (functional-299463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:f2:19", ip: ""} in network mk-functional-299463: {Iface:virbr1 ExpiryTime:2024-08-06 00:03:29 +0000 UTC Type:0 Mac:52:54:00:bb:f2:19 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:functional-299463 Clientid:01:52:54:00:bb:f2:19}
I0805 23:07:54.193421   27553 main.go:141] libmachine: (functional-299463) DBG | domain functional-299463 has defined IP address 192.168.39.190 and MAC address 52:54:00:bb:f2:19 in network mk-functional-299463
I0805 23:07:54.193584   27553 main.go:141] libmachine: (functional-299463) Calling .GetSSHPort
I0805 23:07:54.193749   27553 main.go:141] libmachine: (functional-299463) Calling .GetSSHKeyPath
I0805 23:07:54.193908   27553 main.go:141] libmachine: (functional-299463) Calling .GetSSHUsername
I0805 23:07:54.194045   27553 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/functional-299463/id_rsa Username:docker}
I0805 23:07:54.274156   27553 ssh_runner.go:195] Run: sudo crictl images --output json
I0805 23:07:54.310608   27553 main.go:141] libmachine: Making call to close driver server
I0805 23:07:54.310620   27553 main.go:141] libmachine: (functional-299463) Calling .Close
I0805 23:07:54.310948   27553 main.go:141] libmachine: Successfully made call to close driver server
I0805 23:07:54.310969   27553 main.go:141] libmachine: (functional-299463) DBG | Closing plugin on server side
I0805 23:07:54.310974   27553 main.go:141] libmachine: Making call to close connection to plugin binary
I0805 23:07:54.310985   27553 main.go:141] libmachine: Making call to close driver server
I0805 23:07:54.310993   27553 main.go:141] libmachine: (functional-299463) Calling .Close
I0805 23:07:54.311208   27553 main.go:141] libmachine: Successfully made call to close driver server
I0805 23:07:54.311221   27553 main.go:141] libmachine: Making call to close connection to plugin binary
I0805 23:07:54.311239   27553 main.go:141] libmachine: (functional-299463) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.20s)
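Taken together, the four ImageList subtests above differ only in the --format flag passed to image ls; a compact way to compare the outputs by hand with the same binary and profile:

# List images in each output format exercised by the tests (short, table, json, yaml).
for fmt in short table json yaml; do
  out/minikube-linux-amd64 -p functional-299463 image ls --format "$fmt"
done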

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-299463 ssh pgrep buildkitd: exit status 1 (182.864071ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 image build -t localhost/my-image:functional-299463 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-299463 image build -t localhost/my-image:functional-299463 testdata/build --alsologtostderr: (3.147692778s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-299463 image build -t localhost/my-image:functional-299463 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 7ceffab290b
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-299463
--> 75d5e266498
Successfully tagged localhost/my-image:functional-299463
75d5e26649824dc6d5fd049af549bf09b614fae2b62d99e4bc24d7d78cfdc288
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-299463 image build -t localhost/my-image:functional-299463 testdata/build --alsologtostderr:
I0805 23:07:54.538553   27607 out.go:291] Setting OutFile to fd 1 ...
I0805 23:07:54.538697   27607 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 23:07:54.538706   27607 out.go:304] Setting ErrFile to fd 2...
I0805 23:07:54.538711   27607 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 23:07:54.538908   27607 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-9606/.minikube/bin
I0805 23:07:54.539491   27607 config.go:182] Loaded profile config "functional-299463": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0805 23:07:54.539971   27607 config.go:182] Loaded profile config "functional-299463": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0805 23:07:54.540349   27607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0805 23:07:54.540388   27607 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 23:07:54.556231   27607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38061
I0805 23:07:54.556630   27607 main.go:141] libmachine: () Calling .GetVersion
I0805 23:07:54.557199   27607 main.go:141] libmachine: Using API Version  1
I0805 23:07:54.557219   27607 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 23:07:54.557565   27607 main.go:141] libmachine: () Calling .GetMachineName
I0805 23:07:54.557782   27607 main.go:141] libmachine: (functional-299463) Calling .GetState
I0805 23:07:54.559577   27607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0805 23:07:54.559609   27607 main.go:141] libmachine: Launching plugin server for driver kvm2
I0805 23:07:54.575241   27607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40319
I0805 23:07:54.575683   27607 main.go:141] libmachine: () Calling .GetVersion
I0805 23:07:54.576160   27607 main.go:141] libmachine: Using API Version  1
I0805 23:07:54.576180   27607 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 23:07:54.576478   27607 main.go:141] libmachine: () Calling .GetMachineName
I0805 23:07:54.576661   27607 main.go:141] libmachine: (functional-299463) Calling .DriverName
I0805 23:07:54.576904   27607 ssh_runner.go:195] Run: systemctl --version
I0805 23:07:54.576936   27607 main.go:141] libmachine: (functional-299463) Calling .GetSSHHostname
I0805 23:07:54.579499   27607 main.go:141] libmachine: (functional-299463) DBG | domain functional-299463 has defined MAC address 52:54:00:bb:f2:19 in network mk-functional-299463
I0805 23:07:54.579905   27607 main.go:141] libmachine: (functional-299463) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:f2:19", ip: ""} in network mk-functional-299463: {Iface:virbr1 ExpiryTime:2024-08-06 00:03:29 +0000 UTC Type:0 Mac:52:54:00:bb:f2:19 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:functional-299463 Clientid:01:52:54:00:bb:f2:19}
I0805 23:07:54.580009   27607 main.go:141] libmachine: (functional-299463) DBG | domain functional-299463 has defined IP address 192.168.39.190 and MAC address 52:54:00:bb:f2:19 in network mk-functional-299463
I0805 23:07:54.580036   27607 main.go:141] libmachine: (functional-299463) Calling .GetSSHPort
I0805 23:07:54.580206   27607 main.go:141] libmachine: (functional-299463) Calling .GetSSHKeyPath
I0805 23:07:54.580354   27607 main.go:141] libmachine: (functional-299463) Calling .GetSSHUsername
I0805 23:07:54.580501   27607 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/functional-299463/id_rsa Username:docker}
I0805 23:07:54.665173   27607 build_images.go:161] Building image from path: /tmp/build.563546829.tar
I0805 23:07:54.665243   27607 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0805 23:07:54.675891   27607 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.563546829.tar
I0805 23:07:54.681193   27607 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.563546829.tar: stat -c "%s %y" /var/lib/minikube/build/build.563546829.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.563546829.tar': No such file or directory
I0805 23:07:54.681224   27607 ssh_runner.go:362] scp /tmp/build.563546829.tar --> /var/lib/minikube/build/build.563546829.tar (3072 bytes)
I0805 23:07:54.710144   27607 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.563546829
I0805 23:07:54.721968   27607 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.563546829 -xf /var/lib/minikube/build/build.563546829.tar
I0805 23:07:54.741439   27607 crio.go:315] Building image: /var/lib/minikube/build/build.563546829
I0805 23:07:54.741510   27607 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-299463 /var/lib/minikube/build/build.563546829 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0805 23:07:57.618868   27607 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-299463 /var/lib/minikube/build/build.563546829 --cgroup-manager=cgroupfs: (2.877338033s)
I0805 23:07:57.618936   27607 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.563546829
I0805 23:07:57.630668   27607 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.563546829.tar
I0805 23:07:57.641096   27607 build_images.go:217] Built localhost/my-image:functional-299463 from /tmp/build.563546829.tar
I0805 23:07:57.641136   27607 build_images.go:133] succeeded building to: functional-299463
I0805 23:07:57.641142   27607 build_images.go:134] failed building to: 
I0805 23:07:57.641206   27607 main.go:141] libmachine: Making call to close driver server
I0805 23:07:57.641223   27607 main.go:141] libmachine: (functional-299463) Calling .Close
I0805 23:07:57.641488   27607 main.go:141] libmachine: Successfully made call to close driver server
I0805 23:07:57.641512   27607 main.go:141] libmachine: Making call to close connection to plugin binary
I0805 23:07:57.641521   27607 main.go:141] libmachine: Making call to close driver server
I0805 23:07:57.641529   27607 main.go:141] libmachine: (functional-299463) Calling .Close
I0805 23:07:57.641740   27607 main.go:141] libmachine: Successfully made call to close driver server
I0805 23:07:57.641763   27607 main.go:141] libmachine: Making call to close connection to plugin binary
I0805 23:07:57.641781   27607 main.go:141] libmachine: (functional-299463) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.56s)
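The build that produced localhost/my-image can be replayed with the same image build command; the three STEP lines in the log imply a build context along these lines (a reconstruction for illustration, not the repository's actual testdata/build contents):

# Reconstructed from STEP 1/3 .. 3/3 above; content.txt can be any small file.
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
echo hello > content.txt
out/minikube-linux-amd64 -p functional-299463 image build -t localhost/my-image:functional-299463 . --alsologtostderr

Under the cri-o runtime the build is delegated to podman inside the VM, as the "sudo podman build ... --cgroup-manager=cgroupfs" line in the stderr log shows.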

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.92s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.901366638s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-299463
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.92s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 image load --daemon docker.io/kicbase/echo-server:functional-299463 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-299463 image load --daemon docker.io/kicbase/echo-server:functional-299463 --alsologtostderr: (1.055872935s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 image load --daemon docker.io/kicbase/echo-server:functional-299463 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-299463
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 image load --daemon docker.io/kicbase/echo-server:functional-299463 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.74s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.5s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 image save docker.io/kicbase/echo-server:functional-299463 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 image rm docker.io/kicbase/echo-server:functional-299463 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.74s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.74s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-299463
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 image save --daemon docker.io/kicbase/echo-server:functional-299463 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-299463
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.27s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.27s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.25s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "210.586282ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "43.525824ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.25s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.24s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "199.799327ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "42.99929ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.24s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.4s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-299463 /tmp/TestFunctionalparallelMountCmdany-port2792521495/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722899268377323025" to /tmp/TestFunctionalparallelMountCmdany-port2792521495/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722899268377323025" to /tmp/TestFunctionalparallelMountCmdany-port2792521495/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722899268377323025" to /tmp/TestFunctionalparallelMountCmdany-port2792521495/001/test-1722899268377323025
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-299463 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (181.244394ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug  5 23:07 created-by-test
-rw-r--r-- 1 docker docker 24 Aug  5 23:07 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug  5 23:07 test-1722899268377323025
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 ssh cat /mount-9p/test-1722899268377323025
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-299463 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [e36767a8-7330-4158-842f-8949ab1a0d95] Pending
helpers_test.go:344: "busybox-mount" [e36767a8-7330-4158-842f-8949ab1a0d95] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [e36767a8-7330-4158-842f-8949ab1a0d95] Running
helpers_test.go:344: "busybox-mount" [e36767a8-7330-4158-842f-8949ab1a0d95] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [e36767a8-7330-4158-842f-8949ab1a0d95] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004014106s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-299463 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-299463 /tmp/TestFunctionalparallelMountCmdany-port2792521495/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.40s)
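To reproduce the 9p mount check outside the test harness, a minimal sketch using the same commands the harness runs above; the profile name (functional-299463) comes from this run, and /tmp/mount-src is a placeholder host directory:
	# start the 9p mount in the background (the command blocks while the mount is served)
	out/minikube-linux-amd64 mount -p functional-299463 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 &
	# verify the mount is visible inside the guest, then inspect and tear it down
	out/minikube-linux-amd64 -p functional-299463 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-299463 ssh -- ls -la /mount-9p
	out/minikube-linux-amd64 -p functional-299463 ssh "sudo umount -f /mount-9p"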

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.42s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.42s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.43s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 service list -o json
functional_test.go:1490: Took "432.982789ms" to run "out/minikube-linux-amd64 -p functional-299463 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.43s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.31s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.190:31054
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.3s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.30s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.33s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.190:31054
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.33s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.75s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-299463 /tmp/TestFunctionalparallelMountCmdspecific-port1026513075/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-299463 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (216.864079ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-299463 /tmp/TestFunctionalparallelMountCmdspecific-port1026513075/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-299463 ssh "sudo umount -f /mount-9p": exit status 1 (232.688508ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-299463 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-299463 /tmp/TestFunctionalparallelMountCmdspecific-port1026513075/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.75s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.5s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-299463 /tmp/TestFunctionalparallelMountCmdVerifyCleanup78063072/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-299463 /tmp/TestFunctionalparallelMountCmdVerifyCleanup78063072/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-299463 /tmp/TestFunctionalparallelMountCmdVerifyCleanup78063072/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-299463 ssh "findmnt -T" /mount1: exit status 1 (259.570303ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-299463 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-299463 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-299463 /tmp/TestFunctionalparallelMountCmdVerifyCleanup78063072/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-299463 /tmp/TestFunctionalparallelMountCmdVerifyCleanup78063072/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-299463 /tmp/TestFunctionalparallelMountCmdVerifyCleanup78063072/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
2024/08/05 23:08:03 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.50s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-299463
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-299463
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-299463
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (215.14s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-044175 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0805 23:11:49.982115   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/functional-299463/client.crt: no such file or directory
E0805 23:11:49.987399   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/functional-299463/client.crt: no such file or directory
E0805 23:11:49.997659   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/functional-299463/client.crt: no such file or directory
E0805 23:11:50.018036   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/functional-299463/client.crt: no such file or directory
E0805 23:11:50.058378   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/functional-299463/client.crt: no such file or directory
E0805 23:11:50.138728   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/functional-299463/client.crt: no such file or directory
E0805 23:11:50.299124   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/functional-299463/client.crt: no such file or directory
E0805 23:11:50.619490   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/functional-299463/client.crt: no such file or directory
E0805 23:11:51.260297   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/functional-299463/client.crt: no such file or directory
E0805 23:11:52.541085   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/functional-299463/client.crt: no such file or directory
E0805 23:11:55.101653   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/functional-299463/client.crt: no such file or directory
E0805 23:12:00.221895   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/functional-299463/client.crt: no such file or directory
E0805 23:12:10.462697   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/functional-299463/client.crt: no such file or directory
E0805 23:12:30.943370   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/functional-299463/client.crt: no such file or directory
E0805 23:13:11.903755   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/functional-299463/client.crt: no such file or directory
E0805 23:13:16.352298   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-044175 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m34.46855459s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (215.14s)
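For a manual re-run of the HA bring-up exercised above, a minimal sketch with the same flags recorded in the log; the profile name ha-044175 is specific to this run:
	# create a multi-control-plane (HA) cluster on the KVM driver with CRI-O
	out/minikube-linux-amd64 start -p ha-044175 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
	# confirm all nodes report healthy
	out/minikube-linux-amd64 -p ha-044175 status -v=7 --alsologtostderr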

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.74s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-044175 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-044175 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-044175 -- rollout status deployment/busybox: (4.575133315s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-044175 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-044175 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-044175 -- exec busybox-fc5497c4f-fqp2t -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-044175 -- exec busybox-fc5497c4f-tpqpw -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-044175 -- exec busybox-fc5497c4f-wmfql -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-044175 -- exec busybox-fc5497c4f-fqp2t -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-044175 -- exec busybox-fc5497c4f-tpqpw -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-044175 -- exec busybox-fc5497c4f-wmfql -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-044175 -- exec busybox-fc5497c4f-fqp2t -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-044175 -- exec busybox-fc5497c4f-tpqpw -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-044175 -- exec busybox-fc5497c4f-wmfql -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.74s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.21s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-044175 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-044175 -- exec busybox-fc5497c4f-fqp2t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-044175 -- exec busybox-fc5497c4f-fqp2t -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-044175 -- exec busybox-fc5497c4f-tpqpw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-044175 -- exec busybox-fc5497c4f-tpqpw -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-044175 -- exec busybox-fc5497c4f-wmfql -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-044175 -- exec busybox-fc5497c4f-wmfql -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.21s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (57.92s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-044175 -v=7 --alsologtostderr
E0805 23:14:33.823990   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/functional-299463/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-044175 -v=7 --alsologtostderr: (57.089053485s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (57.92s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-044175 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.81s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 cp testdata/cp-test.txt ha-044175:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 ssh -n ha-044175 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 cp ha-044175:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3481107746/001/cp-test_ha-044175.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 ssh -n ha-044175 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 cp ha-044175:/home/docker/cp-test.txt ha-044175-m02:/home/docker/cp-test_ha-044175_ha-044175-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 ssh -n ha-044175 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 ssh -n ha-044175-m02 "sudo cat /home/docker/cp-test_ha-044175_ha-044175-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 cp ha-044175:/home/docker/cp-test.txt ha-044175-m03:/home/docker/cp-test_ha-044175_ha-044175-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 ssh -n ha-044175 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 ssh -n ha-044175-m03 "sudo cat /home/docker/cp-test_ha-044175_ha-044175-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 cp ha-044175:/home/docker/cp-test.txt ha-044175-m04:/home/docker/cp-test_ha-044175_ha-044175-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 ssh -n ha-044175 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 ssh -n ha-044175-m04 "sudo cat /home/docker/cp-test_ha-044175_ha-044175-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 cp testdata/cp-test.txt ha-044175-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 ssh -n ha-044175-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 cp ha-044175-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3481107746/001/cp-test_ha-044175-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 ssh -n ha-044175-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 cp ha-044175-m02:/home/docker/cp-test.txt ha-044175:/home/docker/cp-test_ha-044175-m02_ha-044175.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 ssh -n ha-044175-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 ssh -n ha-044175 "sudo cat /home/docker/cp-test_ha-044175-m02_ha-044175.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 cp ha-044175-m02:/home/docker/cp-test.txt ha-044175-m03:/home/docker/cp-test_ha-044175-m02_ha-044175-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 ssh -n ha-044175-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 ssh -n ha-044175-m03 "sudo cat /home/docker/cp-test_ha-044175-m02_ha-044175-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 cp ha-044175-m02:/home/docker/cp-test.txt ha-044175-m04:/home/docker/cp-test_ha-044175-m02_ha-044175-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 ssh -n ha-044175-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 ssh -n ha-044175-m04 "sudo cat /home/docker/cp-test_ha-044175-m02_ha-044175-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 cp testdata/cp-test.txt ha-044175-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 ssh -n ha-044175-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 cp ha-044175-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3481107746/001/cp-test_ha-044175-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 ssh -n ha-044175-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 cp ha-044175-m03:/home/docker/cp-test.txt ha-044175:/home/docker/cp-test_ha-044175-m03_ha-044175.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 ssh -n ha-044175-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 ssh -n ha-044175 "sudo cat /home/docker/cp-test_ha-044175-m03_ha-044175.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 cp ha-044175-m03:/home/docker/cp-test.txt ha-044175-m02:/home/docker/cp-test_ha-044175-m03_ha-044175-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 ssh -n ha-044175-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 ssh -n ha-044175-m02 "sudo cat /home/docker/cp-test_ha-044175-m03_ha-044175-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 cp ha-044175-m03:/home/docker/cp-test.txt ha-044175-m04:/home/docker/cp-test_ha-044175-m03_ha-044175-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 ssh -n ha-044175-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 ssh -n ha-044175-m04 "sudo cat /home/docker/cp-test_ha-044175-m03_ha-044175-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 cp testdata/cp-test.txt ha-044175-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 ssh -n ha-044175-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 cp ha-044175-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3481107746/001/cp-test_ha-044175-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 ssh -n ha-044175-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 cp ha-044175-m04:/home/docker/cp-test.txt ha-044175:/home/docker/cp-test_ha-044175-m04_ha-044175.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 ssh -n ha-044175-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 ssh -n ha-044175 "sudo cat /home/docker/cp-test_ha-044175-m04_ha-044175.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 cp ha-044175-m04:/home/docker/cp-test.txt ha-044175-m02:/home/docker/cp-test_ha-044175-m04_ha-044175-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 ssh -n ha-044175-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 ssh -n ha-044175-m02 "sudo cat /home/docker/cp-test_ha-044175-m04_ha-044175-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 cp ha-044175-m04:/home/docker/cp-test.txt ha-044175-m03:/home/docker/cp-test_ha-044175-m04_ha-044175-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 ssh -n ha-044175-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 ssh -n ha-044175-m03 "sudo cat /home/docker/cp-test_ha-044175-m04_ha-044175-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.81s)
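The copy round-trip above boils down to two commands per node pair; a minimal sketch taken from the log, with profile and node names (ha-044175, ha-044175-m02) from this run:
	# copy a file from the host into a node, then read it back over ssh
	out/minikube-linux-amd64 -p ha-044175 cp testdata/cp-test.txt ha-044175-m02:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p ha-044175 ssh -n ha-044175-m02 "sudo cat /home/docker/cp-test.txt"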

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.47s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0805 23:17:17.664805   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/functional-299463/client.crt: no such file or directory
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.468432032s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.47s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.4s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.40s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (17.39s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-044175 node delete m03 -v=7 --alsologtostderr: (16.644646685s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.39s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (289.21s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-044175 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0805 23:28:13.025631   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/functional-299463/client.crt: no such file or directory
E0805 23:28:16.352093   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.crt: no such file or directory
E0805 23:31:49.981568   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/functional-299463/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-044175 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (4m48.477399118s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (289.21s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.37s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (78.31s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-044175 --control-plane -v=7 --alsologtostderr
E0805 23:33:16.351649   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-044175 --control-plane -v=7 --alsologtostderr: (1m17.475236367s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-044175 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (78.31s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.54s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.54s)

                                                
                                    
TestJSONOutput/start/Command (97.54s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-911927 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-911927 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m37.540796698s)
--- PASS: TestJSONOutput/start/Command (97.54s)

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.74s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-911927 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.63s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-911927 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.38s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-911927 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-911927 --output=json --user=testUser: (7.383766858s)
--- PASS: TestJSONOutput/stop/Command (7.38s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-946164 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-946164 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (62.870573ms)
-- stdout --
	{"specversion":"1.0","id":"b0622d6a-dab7-4e3e-b29c-70b0082b5a89","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-946164] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c4614720-5011-4121-a30b-ec2d9dc5817c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19373"}}
	{"specversion":"1.0","id":"d54d375f-8788-46a6-820c-1aba09607325","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5d38d0bf-d1f4-4050-abb7-f7e85b6d7e09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19373-9606/kubeconfig"}}
	{"specversion":"1.0","id":"fc8885df-c098-40d9-8637-49ad1737738b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19373-9606/.minikube"}}
	{"specversion":"1.0","id":"f5608c7a-1513-4e8e-8458-2350830b34af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"a5a298f6-e584-4965-8350-2189fb84d5d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8dc582e0-0a2b-487d-be5d-614a85269565","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-946164" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-946164
--- PASS: TestErrorJSONOutput (0.19s)
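The error path above can be checked by hand with the same invocation; a minimal sketch, where json-output-error-test is a placeholder profile name:
	# an unsupported driver should emit a CloudEvents-style error record and exit 56
	out/minikube-linux-amd64 start -p json-output-error-test --memory=2200 --output=json --wait=true --driver=fail
	echo $?   # 56 (DRV_UNSUPPORTED_OS) in the run above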

                                                
                                    
TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (88.2s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-170065 --driver=kvm2  --container-runtime=crio
E0805 23:36:19.400831   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-170065 --driver=kvm2  --container-runtime=crio: (43.38986201s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-172745 --driver=kvm2  --container-runtime=crio
E0805 23:36:49.981164   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/functional-299463/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-172745 --driver=kvm2  --container-runtime=crio: (41.960879747s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-170065
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-172745
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-172745" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-172745
helpers_test.go:175: Cleaning up "first-170065" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-170065
--- PASS: TestMinikubeProfile (88.20s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (27.22s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-020702 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-020702 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.224388881s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.22s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-020702 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-020702 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)
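
The two checks above confirm that the host directory is exposed inside the guest over a 9p mount. A minimal sketch of the same verification, assuming a hypothetical profile named mount-demo started with the same --mount flags the test uses:

	minikube start -p mount-demo --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 --container-runtime=crio
	minikube -p mount-demo ssh -- ls /minikube-host    # host files should be visible in the guest
	minikube -p mount-demo ssh -- mount | grep 9p      # the mount should be listed as a 9p filesystem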

                                                
                                    
TestMountStart/serial/StartWithMountSecond (29.88s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-037323 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0805 23:38:16.353261   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-037323 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.883869026s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.88s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-037323 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-037323 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.71s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-020702 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.71s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-037323 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-037323 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-037323
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-037323: (1.283321169s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (22.4s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-037323
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-037323: (21.402423245s)
--- PASS: TestMountStart/serial/RestartStopped (22.40s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-037323 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-037323 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (126.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-342677 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-342677 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m6.243898431s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-342677 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (126.64s)
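
The two-node bring-up above is driven entirely by start flags; --nodes=2 provisions a control plane plus one worker, and --wait=true asks minikube to wait for the cluster components before returning. A minimal sketch of the same invocation, assuming a hypothetical profile named demo:

	minikube start -p demo --wait=true --memory=2200 --nodes=2 --driver=kvm2 --container-runtime=crio
	minikube -p demo status    # both demo and demo-m02 should report Running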

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-342677 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-342677 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-342677 -- rollout status deployment/busybox: (3.997615715s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-342677 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-342677 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-342677 -- exec busybox-fc5497c4f-78mt7 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-342677 -- exec busybox-fc5497c4f-snwzb -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-342677 -- exec busybox-fc5497c4f-78mt7 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-342677 -- exec busybox-fc5497c4f-snwzb -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-342677 -- exec busybox-fc5497c4f-78mt7 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-342677 -- exec busybox-fc5497c4f-snwzb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.73s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-342677 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-342677 -- exec busybox-fc5497c4f-78mt7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-342677 -- exec busybox-fc5497c4f-78mt7 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-342677 -- exec busybox-fc5497c4f-snwzb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-342677 -- exec busybox-fc5497c4f-snwzb -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.79s)

                                                
                                    
TestMultiNode/serial/AddNode (49.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-342677 -v 3 --alsologtostderr
E0805 23:41:49.982152   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/functional-299463/client.crt: no such file or directory
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-342677 -v 3 --alsologtostderr: (48.61800256s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-342677 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (49.18s)
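
Adding another node to a running profile is a single node add call; a follow-up status should then list it alongside the existing nodes. A minimal sketch, assuming a hypothetical profile named demo:

	minikube node add -p demo        # appends the next worker node
	minikube -p demo status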

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-342677 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-342677 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-342677 cp testdata/cp-test.txt multinode-342677:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-342677 ssh -n multinode-342677 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-342677 cp multinode-342677:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1038504423/001/cp-test_multinode-342677.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-342677 ssh -n multinode-342677 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-342677 cp multinode-342677:/home/docker/cp-test.txt multinode-342677-m02:/home/docker/cp-test_multinode-342677_multinode-342677-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-342677 ssh -n multinode-342677 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-342677 ssh -n multinode-342677-m02 "sudo cat /home/docker/cp-test_multinode-342677_multinode-342677-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-342677 cp multinode-342677:/home/docker/cp-test.txt multinode-342677-m03:/home/docker/cp-test_multinode-342677_multinode-342677-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-342677 ssh -n multinode-342677 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-342677 ssh -n multinode-342677-m03 "sudo cat /home/docker/cp-test_multinode-342677_multinode-342677-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-342677 cp testdata/cp-test.txt multinode-342677-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-342677 ssh -n multinode-342677-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-342677 cp multinode-342677-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1038504423/001/cp-test_multinode-342677-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-342677 ssh -n multinode-342677-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-342677 cp multinode-342677-m02:/home/docker/cp-test.txt multinode-342677:/home/docker/cp-test_multinode-342677-m02_multinode-342677.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-342677 ssh -n multinode-342677-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-342677 ssh -n multinode-342677 "sudo cat /home/docker/cp-test_multinode-342677-m02_multinode-342677.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-342677 cp multinode-342677-m02:/home/docker/cp-test.txt multinode-342677-m03:/home/docker/cp-test_multinode-342677-m02_multinode-342677-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-342677 ssh -n multinode-342677-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-342677 ssh -n multinode-342677-m03 "sudo cat /home/docker/cp-test_multinode-342677-m02_multinode-342677-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-342677 cp testdata/cp-test.txt multinode-342677-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-342677 ssh -n multinode-342677-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-342677 cp multinode-342677-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1038504423/001/cp-test_multinode-342677-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-342677 ssh -n multinode-342677-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-342677 cp multinode-342677-m03:/home/docker/cp-test.txt multinode-342677:/home/docker/cp-test_multinode-342677-m03_multinode-342677.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-342677 ssh -n multinode-342677-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-342677 ssh -n multinode-342677 "sudo cat /home/docker/cp-test_multinode-342677-m03_multinode-342677.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-342677 cp multinode-342677-m03:/home/docker/cp-test.txt multinode-342677-m02:/home/docker/cp-test_multinode-342677-m03_multinode-342677-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-342677 ssh -n multinode-342677-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-342677 ssh -n multinode-342677-m02 "sudo cat /home/docker/cp-test_multinode-342677-m03_multinode-342677-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.01s)
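
The copy matrix above exercises minikube cp with node-qualified paths: the bare profile name addresses the primary node, while profile-m02 / profile-m03 address the workers, and each hop is verified with ssh -n against the target node. A minimal sketch of one hop, using the multinode-342677 profile from this run:

	minikube -p multinode-342677 cp testdata/cp-test.txt multinode-342677:/home/docker/cp-test.txt
	minikube -p multinode-342677 cp multinode-342677:/home/docker/cp-test.txt multinode-342677-m02:/home/docker/cp-test.txt
	minikube -p multinode-342677 ssh -n multinode-342677-m02 "sudo cat /home/docker/cp-test.txt"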

                                                
                                    
TestMultiNode/serial/StopNode (2.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-342677 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-342677 node stop m03: (1.496250363s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-342677 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-342677 status: exit status 7 (422.574588ms)

                                                
                                                
-- stdout --
	multinode-342677
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-342677-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-342677-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-342677 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-342677 status --alsologtostderr: exit status 7 (426.36717ms)

                                                
                                                
-- stdout --
	multinode-342677
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-342677-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-342677-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 23:42:02.657546   47040 out.go:291] Setting OutFile to fd 1 ...
	I0805 23:42:02.657807   47040 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:42:02.657816   47040 out.go:304] Setting ErrFile to fd 2...
	I0805 23:42:02.657821   47040 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 23:42:02.658025   47040 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19373-9606/.minikube/bin
	I0805 23:42:02.658181   47040 out.go:298] Setting JSON to false
	I0805 23:42:02.658202   47040 mustload.go:65] Loading cluster: multinode-342677
	I0805 23:42:02.658245   47040 notify.go:220] Checking for updates...
	I0805 23:42:02.658724   47040 config.go:182] Loaded profile config "multinode-342677": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0805 23:42:02.658752   47040 status.go:255] checking status of multinode-342677 ...
	I0805 23:42:02.659360   47040 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:42:02.659410   47040 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:42:02.680235   47040 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46405
	I0805 23:42:02.680621   47040 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:42:02.681156   47040 main.go:141] libmachine: Using API Version  1
	I0805 23:42:02.681177   47040 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:42:02.681623   47040 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:42:02.681853   47040 main.go:141] libmachine: (multinode-342677) Calling .GetState
	I0805 23:42:02.683637   47040 status.go:330] multinode-342677 host status = "Running" (err=<nil>)
	I0805 23:42:02.683664   47040 host.go:66] Checking if "multinode-342677" exists ...
	I0805 23:42:02.683998   47040 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:42:02.684059   47040 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:42:02.700123   47040 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41373
	I0805 23:42:02.700521   47040 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:42:02.701019   47040 main.go:141] libmachine: Using API Version  1
	I0805 23:42:02.701041   47040 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:42:02.701317   47040 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:42:02.701490   47040 main.go:141] libmachine: (multinode-342677) Calling .GetIP
	I0805 23:42:02.704230   47040 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:42:02.704610   47040 main.go:141] libmachine: (multinode-342677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:94:1a", ip: ""} in network mk-multinode-342677: {Iface:virbr1 ExpiryTime:2024-08-06 00:39:05 +0000 UTC Type:0 Mac:52:54:00:90:94:1a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-342677 Clientid:01:52:54:00:90:94:1a}
	I0805 23:42:02.704649   47040 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined IP address 192.168.39.10 and MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:42:02.704739   47040 host.go:66] Checking if "multinode-342677" exists ...
	I0805 23:42:02.705004   47040 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:42:02.705039   47040 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:42:02.720285   47040 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37603
	I0805 23:42:02.720691   47040 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:42:02.721141   47040 main.go:141] libmachine: Using API Version  1
	I0805 23:42:02.721163   47040 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:42:02.721504   47040 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:42:02.721689   47040 main.go:141] libmachine: (multinode-342677) Calling .DriverName
	I0805 23:42:02.721871   47040 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 23:42:02.721903   47040 main.go:141] libmachine: (multinode-342677) Calling .GetSSHHostname
	I0805 23:42:02.724475   47040 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:42:02.725069   47040 main.go:141] libmachine: (multinode-342677) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:90:94:1a", ip: ""} in network mk-multinode-342677: {Iface:virbr1 ExpiryTime:2024-08-06 00:39:05 +0000 UTC Type:0 Mac:52:54:00:90:94:1a Iaid: IPaddr:192.168.39.10 Prefix:24 Hostname:multinode-342677 Clientid:01:52:54:00:90:94:1a}
	I0805 23:42:02.725089   47040 main.go:141] libmachine: (multinode-342677) DBG | domain multinode-342677 has defined IP address 192.168.39.10 and MAC address 52:54:00:90:94:1a in network mk-multinode-342677
	I0805 23:42:02.725281   47040 main.go:141] libmachine: (multinode-342677) Calling .GetSSHPort
	I0805 23:42:02.725472   47040 main.go:141] libmachine: (multinode-342677) Calling .GetSSHKeyPath
	I0805 23:42:02.725613   47040 main.go:141] libmachine: (multinode-342677) Calling .GetSSHUsername
	I0805 23:42:02.725783   47040 sshutil.go:53] new ssh client: &{IP:192.168.39.10 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/multinode-342677/id_rsa Username:docker}
	I0805 23:42:02.816761   47040 ssh_runner.go:195] Run: systemctl --version
	I0805 23:42:02.824899   47040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 23:42:02.841769   47040 kubeconfig.go:125] found "multinode-342677" server: "https://192.168.39.10:8443"
	I0805 23:42:02.841794   47040 api_server.go:166] Checking apiserver status ...
	I0805 23:42:02.841827   47040 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 23:42:02.855171   47040 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1189/cgroup
	W0805 23:42:02.864754   47040 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1189/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 23:42:02.864819   47040 ssh_runner.go:195] Run: ls
	I0805 23:42:02.869865   47040 api_server.go:253] Checking apiserver healthz at https://192.168.39.10:8443/healthz ...
	I0805 23:42:02.874140   47040 api_server.go:279] https://192.168.39.10:8443/healthz returned 200:
	ok
	I0805 23:42:02.874164   47040 status.go:422] multinode-342677 apiserver status = Running (err=<nil>)
	I0805 23:42:02.874177   47040 status.go:257] multinode-342677 status: &{Name:multinode-342677 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 23:42:02.874197   47040 status.go:255] checking status of multinode-342677-m02 ...
	I0805 23:42:02.874584   47040 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:42:02.874631   47040 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:42:02.889993   47040 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36839
	I0805 23:42:02.890410   47040 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:42:02.890898   47040 main.go:141] libmachine: Using API Version  1
	I0805 23:42:02.890920   47040 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:42:02.891304   47040 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:42:02.891508   47040 main.go:141] libmachine: (multinode-342677-m02) Calling .GetState
	I0805 23:42:02.892954   47040 status.go:330] multinode-342677-m02 host status = "Running" (err=<nil>)
	I0805 23:42:02.892968   47040 host.go:66] Checking if "multinode-342677-m02" exists ...
	I0805 23:42:02.893287   47040 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:42:02.893333   47040 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:42:02.908416   47040 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39727
	I0805 23:42:02.908801   47040 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:42:02.909231   47040 main.go:141] libmachine: Using API Version  1
	I0805 23:42:02.909249   47040 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:42:02.909542   47040 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:42:02.909761   47040 main.go:141] libmachine: (multinode-342677-m02) Calling .GetIP
	I0805 23:42:02.912651   47040 main.go:141] libmachine: (multinode-342677-m02) DBG | domain multinode-342677-m02 has defined MAC address 52:54:00:1b:95:d7 in network mk-multinode-342677
	I0805 23:42:02.913083   47040 main.go:141] libmachine: (multinode-342677-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:95:d7", ip: ""} in network mk-multinode-342677: {Iface:virbr1 ExpiryTime:2024-08-06 00:40:23 +0000 UTC Type:0 Mac:52:54:00:1b:95:d7 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:multinode-342677-m02 Clientid:01:52:54:00:1b:95:d7}
	I0805 23:42:02.913109   47040 main.go:141] libmachine: (multinode-342677-m02) DBG | domain multinode-342677-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:1b:95:d7 in network mk-multinode-342677
	I0805 23:42:02.913267   47040 host.go:66] Checking if "multinode-342677-m02" exists ...
	I0805 23:42:02.913561   47040 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:42:02.913596   47040 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:42:02.928883   47040 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39875
	I0805 23:42:02.929370   47040 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:42:02.929878   47040 main.go:141] libmachine: Using API Version  1
	I0805 23:42:02.929897   47040 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:42:02.930185   47040 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:42:02.930364   47040 main.go:141] libmachine: (multinode-342677-m02) Calling .DriverName
	I0805 23:42:02.930565   47040 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 23:42:02.930587   47040 main.go:141] libmachine: (multinode-342677-m02) Calling .GetSSHHostname
	I0805 23:42:02.933166   47040 main.go:141] libmachine: (multinode-342677-m02) DBG | domain multinode-342677-m02 has defined MAC address 52:54:00:1b:95:d7 in network mk-multinode-342677
	I0805 23:42:02.933511   47040 main.go:141] libmachine: (multinode-342677-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1b:95:d7", ip: ""} in network mk-multinode-342677: {Iface:virbr1 ExpiryTime:2024-08-06 00:40:23 +0000 UTC Type:0 Mac:52:54:00:1b:95:d7 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:multinode-342677-m02 Clientid:01:52:54:00:1b:95:d7}
	I0805 23:42:02.933542   47040 main.go:141] libmachine: (multinode-342677-m02) DBG | domain multinode-342677-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:1b:95:d7 in network mk-multinode-342677
	I0805 23:42:02.933707   47040 main.go:141] libmachine: (multinode-342677-m02) Calling .GetSSHPort
	I0805 23:42:02.933863   47040 main.go:141] libmachine: (multinode-342677-m02) Calling .GetSSHKeyPath
	I0805 23:42:02.934096   47040 main.go:141] libmachine: (multinode-342677-m02) Calling .GetSSHUsername
	I0805 23:42:02.934347   47040 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19373-9606/.minikube/machines/multinode-342677-m02/id_rsa Username:docker}
	I0805 23:42:03.011169   47040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 23:42:03.025270   47040 status.go:257] multinode-342677-m02 status: &{Name:multinode-342677-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0805 23:42:03.025312   47040 status.go:255] checking status of multinode-342677-m03 ...
	I0805 23:42:03.025662   47040 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0805 23:42:03.025696   47040 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0805 23:42:03.041231   47040 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36581
	I0805 23:42:03.041682   47040 main.go:141] libmachine: () Calling .GetVersion
	I0805 23:42:03.042205   47040 main.go:141] libmachine: Using API Version  1
	I0805 23:42:03.042228   47040 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 23:42:03.042563   47040 main.go:141] libmachine: () Calling .GetMachineName
	I0805 23:42:03.042829   47040 main.go:141] libmachine: (multinode-342677-m03) Calling .GetState
	I0805 23:42:03.044553   47040 status.go:330] multinode-342677-m03 host status = "Stopped" (err=<nil>)
	I0805 23:42:03.044567   47040 status.go:343] host is not running, skipping remaining checks
	I0805 23:42:03.044573   47040 status.go:257] multinode-342677-m03 status: &{Name:multinode-342677-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.35s)
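
Note that status exits with code 7 rather than 0 while any node is stopped, so the non-zero exits above are expected and the test still passes. Scripted health checks should therefore treat that exit code as "a node is down" rather than "the command failed". A minimal sketch, using the multinode-342677 profile from this run:

	minikube -p multinode-342677 node stop m03
	minikube -p multinode-342677 status || echo "status exited $? - at least one node is not running"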

                                                
                                    
TestMultiNode/serial/StartAfterStop (38.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-342677 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-342677 node start m03 -v=7 --alsologtostderr: (38.088811405s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-342677 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.71s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-342677 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-342677 node delete m03: (1.726445728s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-342677 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.26s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (629.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-342677 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0805 23:51:49.981754   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/functional-299463/client.crt: no such file or directory
E0805 23:52:59.401527   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.crt: no such file or directory
E0805 23:53:16.353738   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.crt: no such file or directory
E0805 23:56:49.980894   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/functional-299463/client.crt: no such file or directory
E0805 23:58:16.351346   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-342677 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (10m28.560170165s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-342677 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (629.10s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (47.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-342677
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-342677-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-342677-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (65.730136ms)

                                                
                                                
-- stdout --
	* [multinode-342677-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19373
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19373-9606/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19373-9606/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-342677-m02' is duplicated with machine name 'multinode-342677-m02' in profile 'multinode-342677'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-342677-m03 --driver=kvm2  --container-runtime=crio
E0806 00:01:33.027026   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/functional-299463/client.crt: no such file or directory
E0806 00:01:49.982078   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/functional-299463/client.crt: no such file or directory
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-342677-m03 --driver=kvm2  --container-runtime=crio: (46.374073146s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-342677
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-342677: exit status 80 (223.370075ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-342677 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-342677-m03 already exists in multinode-342677-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-342677-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (47.69s)

                                                
                                    
TestScheduledStopUnix (115.28s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-061986 --memory=2048 --driver=kvm2  --container-runtime=crio
E0806 00:08:16.353645   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-061986 --memory=2048 --driver=kvm2  --container-runtime=crio: (43.7234426s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-061986 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-061986 -n scheduled-stop-061986
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-061986 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-061986 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-061986 -n scheduled-stop-061986
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-061986
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-061986 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0806 00:09:39.402313   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-061986
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-061986: exit status 7 (64.570176ms)

                                                
                                                
-- stdout --
	scheduled-stop-061986
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-061986 -n scheduled-stop-061986
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-061986 -n scheduled-stop-061986: exit status 7 (61.784173ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-061986" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-061986
--- PASS: TestScheduledStopUnix (115.28s)
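
The scheduled-stop sequence above arms a delayed stop, re-arms it with a shorter delay, cancels it, and finally lets a 15s schedule fire, checking the TimeToStop and Host status fields along the way. A minimal sketch of the same flow, assuming a hypothetical profile named sched-demo:

	minikube stop -p sched-demo --schedule 5m                  # arm a stop five minutes out
	minikube status -p sched-demo --format={{.TimeToStop}}     # query the pending schedule
	minikube stop -p sched-demo --cancel-scheduled             # disarm it again
	minikube stop -p sched-demo --schedule 15s                 # arm a short schedule and let it fire
	minikube status -p sched-demo --format={{.Host}}           # reports Stopped once it has fired (exit status 7)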

                                                
                                    
TestRunningBinaryUpgrade (259.88s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1276783158 start -p running-upgrade-863913 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1276783158 start -p running-upgrade-863913 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m10.542126578s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-863913 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-863913 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m55.696460844s)
helpers_test.go:175: Cleaning up "running-upgrade-863913" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-863913
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-863913: (11.007495384s)
--- PASS: TestRunningBinaryUpgrade (259.88s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-849515 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-849515 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (69.151146ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-849515] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19373
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19373-9606/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19373-9606/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
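
As the MK_USAGE error above says, --kubernetes-version cannot be combined with --no-kubernetes; if a version is pinned in the global config it has to be unset first. A minimal sketch of the working path, assuming a hypothetical profile named nok8s-demo:

	minikube config unset kubernetes-version     # clear any globally pinned version
	minikube start -p nok8s-demo --no-kubernetes --driver=kvm2 --container-runtime=crio
	minikube ssh -p nok8s-demo "sudo systemctl is-active kubelet" || echo "kubelet is not active, as expected"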

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (75.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-849515 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-849515 --driver=kvm2  --container-runtime=crio: (1m14.878060213s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-849515 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (75.12s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (41.9s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-849515 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-849515 --no-kubernetes --driver=kvm2  --container-runtime=crio: (40.632608411s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-849515 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-849515 status -o json: exit status 2 (254.15671ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-849515","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-849515
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-849515: (1.010415573s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (41.90s)

                                                
                                    
TestNoKubernetes/serial/Start (74.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-849515 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0806 00:11:49.980987   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/functional-299463/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-849515 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m14.420703517s)
--- PASS: TestNoKubernetes/serial/Start (74.42s)

                                                
                                    
TestPause/serial/Start (128.57s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-161508 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-161508 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (2m8.572990979s)
--- PASS: TestPause/serial/Start (128.57s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-849515 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-849515 "sudo systemctl is-active --quiet service kubelet": exit status 1 (195.597733ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.74s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (1.124432702s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.74s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-849515
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-849515: (1.553281089s)
--- PASS: TestNoKubernetes/serial/Stop (1.55s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (47.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-849515 --driver=kvm2  --container-runtime=crio
E0806 00:13:16.352327   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/addons-435364/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-849515 --driver=kvm2  --container-runtime=crio: (47.551818528s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (47.55s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-849515 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-849515 "sudo systemctl is-active --quiet service kubelet": exit status 1 (203.804712ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.66s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.66s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (123.1s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2904864017 start -p stopped-upgrade-936666 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2904864017 start -p stopped-upgrade-936666 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (55.63108663s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2904864017 -p stopped-upgrade-936666 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2904864017 -p stopped-upgrade-936666 stop: (2.135516868s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-936666 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-936666 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m5.33226823s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (123.10s)
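
The upgrade test drives one profile first with an older release binary and then with the freshly built one, with a stop in between so the newer binary restarts the stopped cluster. A minimal sketch of that sequence, assuming a hypothetical copy of an older release saved as ./minikube-old and a profile named upgrade-demo:

	./minikube-old start -p upgrade-demo --memory=2200 --vm-driver=kvm2 --container-runtime=crio    # old binary, legacy --vm-driver flag
	./minikube-old -p upgrade-demo stop
	minikube start -p upgrade-demo --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio    # new binary takes over the same profile
	minikube logs -p upgrade-demo    # spot-check the upgraded cluster, as the MinikubeLogs test does below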

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.9s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-936666
E0806 00:16:49.981208   16792 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19373-9606/.minikube/profiles/functional-299463/client.crt: no such file or directory
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.90s)

                                                
                                    

Test skip (37/230)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.3/cached-images 0
15 TestDownloadOnly/v1.30.3/binaries 0
16 TestDownloadOnly/v1.30.3/kubectl 0
23 TestDownloadOnly/v1.31.0-rc.0/cached-images 0
24 TestDownloadOnly/v1.31.0-rc.0/binaries 0
25 TestDownloadOnly/v1.31.0-rc.0/kubectl 0
29 TestDownloadOnlyKic 0
38 TestAddons/serial/Volcano 0
47 TestAddons/parallel/Olm 0
57 TestDockerFlags 0
60 TestDockerEnvContainerd 0
62 TestHyperKitDriverInstallOrUpdate 0
63 TestHyperkitDriverSkipUpgrade 0
114 TestFunctional/parallel/DockerEnv 0
115 TestFunctional/parallel/PodmanEnv 0
126 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
127 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
128 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
132 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
163 TestGvisorAddon 0
185 TestImageBuild 0
212 TestKicCustomNetwork 0
213 TestKicExistingNetwork 0
214 TestKicCustomSubnet 0
215 TestKicStaticIP 0
247 TestChangeNoneUser 0
250 TestScheduledStopWindows 0
252 TestSkaffold 0
254 TestInsufficientStorage 0
258 TestMissingContainerUpgrade 0
TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.31.0-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/binaries (0.00s)

TestDownloadOnly/v1.31.0-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/Volcano (0s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)
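Most of the skips in this section are runtime or driver gates of this form: the test checks which container runtime (crio in this run) or driver the suite was started with and skips itself when the environment cannot exercise the feature. A self-contained sketch of that gating pattern is shown below; the variable and test name are hypothetical stand-ins for illustration, not minikube's actual helpers.

package example

import "testing"

// containerRuntime stands in for the value the suite would derive from its
// --container-runtime flag; it is hard-coded here purely for illustration.
var containerRuntime = "crio"

func TestDockerOnlyFeature(t *testing.T) {
	if containerRuntime != "docker" {
		// Mirrors the log line above: the test is skipped, not failed,
		// when the environment cannot exercise it.
		t.Skipf("skipping: only runs with docker container runtime, currently testing %s", containerRuntime)
	}
	// ... docker-specific assertions would go here ...
}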

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
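All eight tunnel subtests above are gated by the same check at functional_test_tunnel_test.go:90: the tunnel manipulates host routes, and on this runner the 'route' command cannot be run without an interactive sudo password, so the whole group is skipped. A hedged sketch of such a gate follows; the helper name and the sudo -n probe are assumptions for illustration, not minikube's actual code.

package example

import (
	"os/exec"
	"testing"
)

// skipUnlessPasswordlessRoute is a hypothetical gate: `sudo -n` refuses to
// prompt, so it fails whenever running 'route' would require a password.
func skipUnlessPasswordlessRoute(t *testing.T) {
	t.Helper()
	if err := exec.Command("sudo", "-n", "route").Run(); err != nil {
		t.Skipf("password required to execute 'route', skipping testTunnel: %v", err)
	}
}

func TestTunnelScenario(t *testing.T) {
	skipUnlessPasswordlessRoute(t)
	// ... tunnel setup and assertions would go here ...
}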

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)